A small shim that the mountpoint-s3-crt crate uses to connect to a logging implementation. The
CRT’s logging implementation uses varargs, but Rust hasn’t stabilized those, so we need a small C
trampoline to translate varargs to Rust strings.
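A minimal sketch of such a trampoline, assuming a hypothetical non-variadic callback on the receiving side (the names here are illustrative, not the crate’s actual API): the variadic call is formatted once in C, and only the resulting plain string crosses the boundary.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical callback type: the other side receives a plain, formatted
 * string instead of a C va_list it cannot consume. */
typedef void (*log_string_fn)(const char *message);

/* Trampoline: formats the varargs into a fixed buffer, then hands the
 * resulting string to the non-variadic callback. */
static void log_trampoline(log_string_fn callback, const char *fmt, ...) {
    char buffer[256];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buffer, sizeof(buffer), fmt, args);
    va_end(args);
    callback(buffer);
}

static char g_last[256];
static void capture(const char *message) {
    strncpy(g_last, message, sizeof(g_last) - 1);
}
```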
struct aws_atomic_var represents an atomic variable - a value which can hold an integer or pointer
that can be manipulated atomically. A struct aws_atomic_var should normally only be manipulated
with the atomics methods defined in this header.
Represents a length-delimited binary string or buffer. If the byte buffer points
to constant memory, or memory that should otherwise not be freed by this
struct, set the allocator to NULL and the free function will be a no-op.
Args for creating a new channel.
event_loop to use for IO and tasks. on_setup_completed will be invoked when
the setup process is finished. It will be executed in the event loop’s thread.
on_shutdown_completed will be executed upon channel shutdown.
Configuration options for a provider that functions as a caching decorator. Credentials sourced through this
provider will be cached within it until their expiration time. When the cached credentials expire, new
credentials will be fetched when next queried.
Configuration options for a provider that queries, in order, a list of providers. This provider uses the
first set of credentials successfully queried. Providers are queried one at a time; a provider is not queried
until the preceding provider has failed to source credentials.
Configuration options for a provider that sources credentials from the aws config and credentials files
(by default ~/.aws/config and ~/.aws/credentials)
Configuration options for the STS credentials provider.
The STS credentials provider will try to automatically resolve the region and use a regional STS endpoint if successful.
The region resolution order is the following:
Pattern-struct that functions as a base “class” for all statistics structures. To conform
to the pattern, a statistics structure must have its first member be the category. In that
case it becomes “safe” to cast from aws_crt_statistics_base to the specific statistics structure
based on the category value.
Represents an element in the hash table. Various operations on the hash
table may provide pointers to elements stored within the hash table;
generally, calling code may alter value, but must not alter key (or any
information used to compute key’s hash code).
A stream exists for the duration of a request/response exchange.
A client creates a stream to send a request and receive a response.
A server creates a stream to receive a request and send a response.
In http/2, a push-promise stream can be sent by a server and received by a client.
Base class for input streams.
Note: when you implement an input stream, the ref_count needs to be initialized so that the resources are
cleaned up when the count reaches zero.
Options for aws_logger_init_standard().
Set filename to open a file for logging and close it when the logger cleans up.
Set file to use a file that is already open, such as stderr or stdout.
(Optional)
Use a cached merged profile collection. A merged collection combines the config-file-based profile
collection (~/.aws/config) and the credentials-file-based profile collection (~/.aws/credentials) using
aws_profile_collection_new_from_merge.
If this option is provided, config_file_name_override and credentials_file_name_override will be ignored.
Generic memory pool interface.
Allows consumers of aws-c-s3 to override how buffer allocation for part buffers is done.
Refer to docs/memory_aware_request_execution.md for details on how default implementation works.
WARNING: this is currently an experimental feature and does not provide API stability guarantees, so it should be
used with caution. At a high level, the flow is as follows:
Arguments to setup a server socket listener which will also negotiate and configure TLS.
This creates a socket listener bound to ‘host’ and ‘port’ using socket options ‘options’, and TLS options
tls_options. incoming_callback will be invoked once an incoming channel is ready for use and TLS is
finished negotiating, or if an error is encountered. shutdown_callback will be invoked once the channel has
shutdown. destroy_callback will be invoked after the server socket listener is destroyed, and all associated
connections and channels have finished shutting down. Immediately after the shutdown_callback returns, the channel
is cleaned up automatically. All callbacks are invoked in the thread of the event-loop that listener is assigned to.
Hash table data structure. This module provides an automatically resizing
hash table implementation for general purpose use. The hash table stores a
mapping between void * keys and values; it is expected that in most cases,
these will point to a structure elsewhere in the heap, instead of inlining a
key or value into the hash table element itself.
Header block type.
INFORMATIONAL: Header block for 1xx informational (interim) responses.
MAIN: Main header block sent with request or response.
TRAILING: Headers sent after the body of a request or response.
A Meta Request represents a group of generated requests that are being done on behalf of the
original request. For example, one large GetObject request can be transformed into a series
of ranged GetObject requests that are executed in parallel to improve throughput.
Specifies the join strategy used on an aws_thread, which in turn controls whether or not a thread participates
in the managed thread system. The managed thread system provides logic to guarantee a join on all participating
threads at the cost of laziness (the user cannot control when joins happen).
The SHA-256 of an empty string:
‘e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855’
For use with aws_signing_config_aws.signed_body_value.
Adds [num] arguments (expected to be of size_t), and returns the result in *r.
If the result overflows, returns AWS_OP_ERR; otherwise returns AWS_OP_SUCCESS.
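The overflow check can be sketched in portable C as follows; the return-code macros here stand in for AWS_OP_SUCCESS/AWS_OP_ERR and are assumptions of this sketch, not the library’s definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative return codes mirroring the library's convention. */
#define OP_SUCCESS 0
#define OP_ERR (-1)

/* Adds two size_t values, storing the sum in *r.
 * Returns OP_ERR if the addition would overflow. */
static int checked_add(size_t a, size_t b, size_t *r) {
    if (b > SIZE_MAX - a) {
        return OP_ERR; /* a + b would wrap around */
    }
    *r = a + b;
    return OP_SUCCESS;
}
```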
Compare an array and a null-terminated string.
Returns true if their contents are equivalent.
The array should NOT contain a null-terminator, or the comparison will always return false.
NULL may be passed as the array pointer if its length is declared to be 0.
Perform a case-insensitive string comparison of an array and a null-terminated string.
Return whether their contents are equivalent.
The array should NOT contain a null-terminator, or the comparison will always return false.
NULL may be passed as the array pointer if its length is declared to be 0.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
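A self-contained sketch of such a comparison, using ASCII-only case folding so the result does not depend on the process locale (function names are illustrative, not the library’s):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* ASCII-only lowercase: behaves like the "C" locale regardless of the
 * process locale, so UTF-8 multibyte sequences pass through byte-for-byte. */
static unsigned char ascii_lower(unsigned char c) {
    return (c >= 'A' && c <= 'Z') ? (unsigned char)(c + ('a' - 'A')) : c;
}

/* Case-insensitive comparison of a length-delimited array against a
 * null-terminated string; a NULL array is allowed when len is 0. */
static bool array_eq_c_str_ignore_case(const unsigned char *array, size_t len, const char *c_str) {
    for (size_t i = 0; i < len; i++) {
        unsigned char s = (unsigned char)c_str[i];
        if (s == '\0' || ascii_lower(array[i]) != ascii_lower(s)) {
            return false; /* c_str too short, or bytes differ */
        }
    }
    return c_str[len] == '\0'; /* equal only if c_str ends here too */
}
```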
Perform a case-insensitive string comparison of two arrays.
Return whether their contents are equivalent.
NULL may be passed as the array pointer if its length is declared to be 0.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
A convenience function for sorting lists of (const struct aws_string *) elements. This can be used as a
comparator for aws_array_list_sort. It is just a simple wrapper around aws_string_compare.
Copies the elements from ‘from’ to ‘to’. If ‘to’ is in static mode, it must be at least the same length as ‘from’. Any data
in ‘to’ will be overwritten in this copy.
Ensures that the array list has enough capacity to store a value at the specified index. If there is not already
enough capacity, and the list is in dynamic mode, this function will attempt to allocate more memory, expanding the
list. In static mode, if ‘index’ is beyond the maximum index, AWS_ERROR_INVALID_INDEX will be raised.
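The dynamic-mode growth policy can be sketched as follows, using a toy list type rather than the library’s aws_array_list (overflow checks on the size computation are omitted for brevity):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal dynamic-mode array list; names are illustrative. */
struct tiny_list {
    unsigned char *data;
    size_t item_size;
    size_t capacity; /* in items */
};

/* Grows the backing array (doubling) until 'index' fits.
 * Returns 0 on success, -1 if allocation fails. */
static int tiny_list_ensure_capacity(struct tiny_list *list, size_t index) {
    if (index < list->capacity) {
        return 0; /* already big enough */
    }
    size_t new_capacity = list->capacity ? list->capacity : 1;
    while (new_capacity <= index) {
        new_capacity *= 2; /* grow by a factor of 2 */
    }
    unsigned char *grown = realloc(list->data, new_capacity * list->item_size);
    if (!grown) {
        return -1;
    }
    list->data = grown;
    list->capacity = new_capacity;
    return 0;
}
```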
Deletes the element at this index in the list if it exists.
If element does not exist, AWS_ERROR_INVALID_INDEX will be raised.
This call results in shifting all remaining elements towards the front.
Avoid this call unless that is intended behavior.
Initializes an array list with an array of size initial_item_allocation * item_size. In this mode, the array size
will grow by a factor of 2 upon insertion if space is not available. initial_item_allocation is the number of
elements you want space allocated for. item_size is the size of each element in bytes. Mixing item types is not
supported by this API.
Initializes an array list with a preallocated array of void *. item_count is the number of elements in the array,
and item_size is the size in bytes of each element. Mixing item types is not supported
by this API. Once this list is full, new items will be rejected.
Initializes an array list with a preallocated array of already-initialized elements. item_count is the number of
elements in the array, and item_size is the size in bytes of each element.
Deletes the element at the front of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised.
This call results in shifting all of the remaining elements towards the front. Avoid this call unless that
is intended behavior.
Delete N elements from the front of the list.
Remaining elements are shifted to the front of the list.
If the list has fewer than N elements, the list is cleared.
This call is more efficient than calling aws_array_list_pop_front() N times.
Pushes the memory pointed to by val onto the front of the internal list.
This call results in shifting all of the elements in the list. Avoid this call unless that
is intended behavior.
Copies the memory pointed to by val into the array at index. If in dynamic mode, the size will grow by a factor
of two when the array is full. In static mode, AWS_ERROR_INVALID_INDEX will be raised if the index is past the bounds
of the array.
Read once from the async stream into the buffer.
The read completes when at least 1 byte is read, the buffer is full, or EOF is reached.
Depending on implementation, the read could complete at any time.
It may complete synchronously. It may complete on another thread.
Returns a future, which will contain an error code if something went wrong,
or a result bool indicating whether EOF has been reached.
Read repeatedly from the async stream until the buffer is full, or EOF is reached.
Depending on implementation, this could complete at any time.
It may complete synchronously. It may complete on another thread.
Returns a future, which will contain an error code if something went wrong,
or a result bool indicating whether EOF has been reached.
Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set
to the value in *var. Uses sequentially consistent memory ordering, regardless of success or failure.
Returns true if the compare was successful and the variable updated to desired.
Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set
to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure.
order_failure must be no stronger than order_success, and must not be release or acq_rel.
Returns true if the compare was successful and the variable updated to desired.
Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set
to the value in *var. Uses sequentially consistent memory ordering, regardless of success or failure.
Returns true if the compare was successful and the variable updated to desired.
Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set
to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure.
order_failure must be no stronger than order_success, and must not be release or acq_rel.
Returns true if the compare was successful and the variable updated to desired.
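C11’s stdatomic compare-exchange has the same contract as the operations described above, and can illustrate the semantics; this is an analog, not the library’s implementation.

```c
#include <assert.h>
#include <stdatomic.h>

/* Analog of the compare-exchange above: if *var equals *expected, set
 * *var = desired and return true; otherwise rewrite *expected with the
 * value actually observed and return false. */
static _Bool try_swap(atomic_int *var, int *expected, int desired) {
    /* seq_cst ordering on both the success and failure paths */
    return atomic_compare_exchange_strong(var, expected, desired);
}
```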
Initializes an atomic variable with an integer value. This operation should be done before any
other operations on this atomic variable, and must be done before attempting any parallel operations.
Initializes an atomic variable with a pointer value. This operation should be done before any
other operations on this atomic variable, and must be done before attempting any parallel operations.
Provides the same reordering guarantees as an atomic operation with the specified memory order, without
needing to actually perform an atomic operation.
Copies ‘from’ to ‘to’. If ‘to’ is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be
returned. dest->len will contain the amount of data actually copied to dest.
Copy contents of cursor to buffer, then update cursor to reference the memory stored in the buffer.
If buffer is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be returned.
Copies a single byte into ‘to’. If ‘to’ is too small, the buffer will be grown appropriately and
the old contents copied over, before the byte is appended.
Copies a single byte into ‘to’. If ‘to’ is too small, the buffer will be grown appropriately and
the old contents copied over, before the byte is appended.
Writes the uri decoding of a UTF-8 cursor to a buffer,
replacing %xx escapes by their single byte equivalent.
For example, reading “a%20b_c” would write “a b_c”.
Writes the uri query param encoding (passthrough alnum + ‘-’ ‘_’ ‘~’ ‘.’) of a UTF-8 cursor to a buffer
For example, reading “a b_c” would write “a%20b_c”.
Copies ‘from’ to ‘to’ while converting bytes via the passed-in lookup table.
If ‘to’ is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be
returned. to->len will contain its original size plus the amount of data actually copied to ‘to’.
Concatenates a variable number of struct aws_byte_buf * into destination.
Number of args must be greater than 1. If dest is too small,
AWS_ERROR_DEST_COPY_TOO_SMALL will be returned. dest->len will contain the
amount of data actually copied to dest.
Compare an aws_byte_buf and a null-terminated string.
Returns true if their contents are equivalent.
The buffer should NOT contain a null-terminator, or the comparison will always return false.
Perform a case-insensitive string comparison of an aws_byte_buf and a null-terminated string.
Return whether their contents are equivalent.
The buffer should NOT contain a null-terminator, or the comparison will always return false.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
Perform a case-insensitive string comparison of two aws_byte_buf structures.
Return whether their contents are equivalent.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
Init buffer with contents of multiple cursors, and update cursors to reference the memory stored in the buffer.
Each cursor arg must be a struct aws_byte_cursor *. NULL must be passed as the final arg.
NOTE: Do not append/grow/resize buffers initialized this way, or the cursors will end up referencing invalid memory.
Returns AWS_OP_SUCCESS in case of success.
AWS_OP_ERR is returned if memory can’t be allocated or the total cursor length exceeds SIZE_MAX.
Initializes an aws_byte_buf structure based on another valid one.
Requires: *src and *allocator are valid objects.
Ensures: *dest is a valid aws_byte_buf with a new backing array dest->buffer
which is a copy of the elements from src->buffer.
Copies src buffer into dest and sets the correct len and capacity.
A new memory zone is allocated for dest->buffer. When dest is no longer needed it will have to be cleaned up using
aws_byte_buf_clean_up(dest).
Dest capacity and len will be equal to the src len. The allocator of dest will be identical to the allocator parameter.
If the src buffer is null, dest will have a null buffer with a len and a capacity of 0.
Returns AWS_OP_SUCCESS in case of success or AWS_OP_ERR when memory can’t be allocated.
Reads ‘filename’ into ‘out_buf’. If successful, ‘out_buf’ is allocated and filled with the data;
It is your responsibility to call ‘aws_byte_buf_clean_up()’ on it. Otherwise, ‘out_buf’ remains
unused. In the very unfortunate case where some API needs to treat out_buf as a c_string, a null terminator
is appended, but is not included as part of the length field.
Same as aws_byte_buf_init_from_file(), but for reading “special files” like /proc/cpuinfo.
These files don’t accurately report their size, so size_hint is used as initial buffer size,
and the buffer grows until the whole file is read.
Evaluates the set of properties that define the shape of all valid aws_byte_buf structures.
It is also a cheap check, in the sense that it runs in constant time (i.e., no loops or recursion).
Resets the len of the buffer to 0, but does not free the memory. The buffer can then be reused.
Optionally zeroes the contents, if the “zero_contents” flag is true.
Writes low 24-bits (3 bytes) of an unsigned integer in network byte order (big endian) to buffer.
Ex: If x is 0x00AABBCC then {0xAA, 0xBB, 0xCC} is written to buffer.
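A sketch of the 24-bit big-endian write, plus the matching read, in portable C; function names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Writes the low 24 bits of x to out[0..2] in network byte order. */
static void write_be24(uint32_t x, uint8_t out[3]) {
    out[0] = (uint8_t)((x >> 16) & 0xFF); /* most significant byte first */
    out[1] = (uint8_t)((x >> 8) & 0xFF);
    out[2] = (uint8_t)(x & 0xFF);
}

/* Reads 3 big-endian bytes back into host byte order. */
static uint32_t read_be24(const uint8_t in[3]) {
    return ((uint32_t)in[0] << 16) | ((uint32_t)in[1] << 8) | (uint32_t)in[2];
}
```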
Tests if the given aws_byte_cursor has at least len bytes remaining. If so,
*buf is advanced by len bytes (incrementing ->ptr and decrementing ->len),
and an aws_byte_cursor referring to the first len bytes of the original *buf
is returned. Otherwise, an aws_byte_cursor with ->ptr = NULL, ->len = 0 is
returned.
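The advance semantics can be sketched with a toy cursor type (illustrative names, not the library’s struct): the consumed prefix is returned, and the original cursor is moved past it.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal cursor: a non-owning pointer/length view. */
struct cursor {
    const unsigned char *ptr;
    size_t len;
};

/* Advances *buf by len bytes and returns a cursor over the consumed
 * prefix; returns {NULL, 0} if fewer than len bytes remain. */
static struct cursor cursor_advance(struct cursor *buf, size_t len) {
    struct cursor result = {NULL, 0};
    if (buf->len >= len) {
        result.ptr = buf->ptr;
        result.len = len;
        buf->ptr += len;
        buf->len -= len;
    }
    return result;
}
```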
Behaves identically to aws_byte_cursor_advance, but avoids speculative
execution potentially reading out-of-bounds pointers (by returning an
empty ptr in such speculated paths).
Perform a case-insensitive string comparison of an aws_byte_cursor and an aws_byte_buf.
Return whether their contents are equivalent.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
Compare an aws_byte_cursor and a null-terminated string.
Returns true if their contents are equivalent.
The cursor should NOT contain a null-terminator, or the comparison will always return false.
Perform a case-insensitive string comparison of an aws_byte_cursor and a null-terminated string.
Return whether their contents are equivalent.
The cursor should NOT contain a null-terminator, or the comparison will always return false.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
Perform a case-insensitive string comparison of two aws_byte_cursor structures.
Return whether their contents are equivalent.
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
Search for an exact byte match inside a cursor. The first match will be returned. Returns AWS_OP_SUCCESS
on successful match and first_find will be set to the offset in input_str, and length will be the remaining length
from input_str past the returned offset. If the match was not found, AWS_OP_ERR will be returned and
AWS_ERROR_STRING_MATCH_NOT_FOUND will be raised.
Evaluates the set of properties that define the shape of all valid aws_byte_cursor structures.
It is also a cheap check, in the sense it runs in constant time (i.e., no loops or recursion).
No copies, no buffer allocations. Iterates over input_str, and returns the
next substring between split_on instances relative to previous substr.
Behaves similarly to strtok, with substr being used as state for the next split.
Reads an unsigned 24-bit value (3 bytes) in network byte order from cur,
and places it in host byte order into 32-bit var.
Ex: if cur’s next 3 bytes are {0xAA, 0xBB, 0xCC}, then var becomes 0x00AABBCC.
Reads 2 hex characters from ASCII/UTF-8 text to produce an 8-bit number.
Accepts both lowercase ‘a’-‘f’ and uppercase ‘A’-‘F’.
For example: “0F” produces 15.
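A self-contained sketch of the two-hex-character decode (illustrative names), accepting both cases:

```c
#include <assert.h>

/* Maps one ASCII hex digit to its value, or -1 if not a hex digit. */
static int hex_value(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Reads two hex characters into an 8-bit number; returns -1 on bad input. */
static int read_hex_byte(const char hex[2]) {
    int hi = hex_value(hex[0]);
    int lo = hex_value(hex[1]);
    if (hi < 0 || lo < 0) return -1;
    return (hi << 4) | lo;
}
```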
No copies, no buffer allocations. Fills in output with a list of
aws_byte_cursor instances where buffer is an offset into the input_str and
len is the length of that string in the original buffer.
No copies, no buffer allocations. Fills in output with a list of aws_byte_cursor instances where buffer is
an offset into the input_str and len is the length of that string in the original buffer. N is the max number of
splits; if this value is zero, it will add all splits to the output.
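The no-copy split can be sketched by emitting (pointer, length) views into the original buffer; this toy version splits on a single byte and is not the library’s implementation. Note that adjacent delimiters produce empty pieces.

```c
#include <assert.h>
#include <stddef.h>

/* A non-owning view into the input. */
struct piece { const char *ptr; size_t len; };

/* Splits input into at most max_out views on 'split_on', without copying.
 * Returns the number of pieces written. */
static size_t split_on_char(const char *input, size_t input_len, char split_on,
                            struct piece *out, size_t max_out) {
    size_t count = 0;
    size_t start = 0;
    for (size_t i = 0; i <= input_len && count < max_out; i++) {
        if (i == input_len || input[i] == split_on) {
            out[count].ptr = input + start; /* offset into the original buffer */
            out[count].len = i - start;
            count++;
            start = i + 1;
        }
    }
    return count;
}
```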
Return true if the input starts with the prefix (case-insensitive).
The “C” locale is used for comparing upper and lowercase letters.
Data is assumed to be ASCII text, UTF-8 will work fine too.
Prevent a channel’s memory from being freed.
Any number of users may acquire a hold to prevent a channel and its handlers from being unexpectedly freed.
Any user which acquires a hold must release it via aws_channel_release_hold().
Memory will be freed once all holds are released and aws_channel_destroy() has been called.
Acquires a message from the event loop’s message pool. size_hint is merely a hint; the message may be smaller than
you requested, and you are responsible for checking its bounds. If the returned message is not large enough, you
must send multiple messages. This cannot fail, and it never returns NULL.
Mark the channel, along with all slots and handlers, for destruction.
Must be called after shutdown has completed.
Can be called from any thread assuming ‘aws_channel_shutdown()’ has completed.
Note that memory will not be freed until all users which acquired holds on the channel via
aws_channel_acquire_hold(), release them via aws_channel_release_hold().
Allocates a new channel. Unless otherwise specified, all functions for channels and channel slots must be executed
within that channel’s event loop’s thread. channel_options are copied.
Schedules a task to run on the event loop at the specified time.
This is the ideal way to move a task into the correct thread. It’s also handy for context switches.
Use aws_channel_current_clock_time() to get the current time in nanoseconds.
This function is safe to call from any thread.
Schedules a task to run on the event loop as soon as possible.
This is the ideal way to move a task into the correct thread. It’s also handy for context switches.
This function is safe to call from any thread.
Instrument a channel with a statistics handler. While instrumented with a statistics handler, the channel
will periodically report per-channel-handler-specific statistics about handler performance and state.
Initiates shutdown of the channel. Shutdown will begin with the left-most slot. Each handler will invoke
‘aws_channel_slot_on_handler_shutdown_complete’ once it has finished its shutdown process for the read direction.
Once the right-most slot has shut down in the read direction, the process will begin shutting down in the write
direction, starting at the right-most slot. Once the left-most slot has shut down in the write direction,
‘callbacks->shutdown_completed’ will be invoked in the event loop’s thread.
Convenience function that invokes aws_channel_acquire_message_from_pool(),
asking for the largest reasonable DATA message that can be sent in the write direction,
with upstream overhead accounted for. This cannot fail, and it never returns NULL.
Fetches the downstream read window. This gives you the information necessary to honor the read window. If you call
send_message() and it exceeds this window, the message will be rejected.
Inserts ‘to_add’ at the position immediately to the left of slot. Note that the first call to
aws_channel_slot_new() adds it to the channel implicitly.
Inserts ‘to_add’ at the position immediately to the right of slot. Note that the first call to
aws_channel_slot_new() adds it to the channel implicitly.
Allocates and initializes a new slot for use with the channel. If this is the first slot in the channel, it will
automatically be added to the channel as the first slot. For all subsequent calls on a given channel, the slot will
need to be added to the channel via the aws_channel_slot_insert_right(), aws_channel_slot_insert_end(), or
aws_channel_slot_insert_left() APIs.
Returns true if the caller is on the event loop’s thread. If false, you likely need to use
aws_channel_schedule_task(). This function is safe to call from any thread.
A way for external processes to force a read by the data-source channel handler. Necessary in certain cases, like
when a server channel finishes setting up its initial handlers: a read may have already been triggered on the
socket (the client’s CLIENT_HELLO TLS payload, for example) and, absent further data or notifications, this data
would never get processed.
Computes an aws_checksum corresponding to the provided enum. Passing a function pointer around instead of using a
conditional would be faster, but would be a negligible improvement compared to the cost of processing data twice
(the only time this function would be used), and would be harder to follow.
Helper stream that takes in a stream to keep track of the checksum of the underlying stream during read.
Invoke aws_checksum_stream_finalize_checksum to get the checksum of the data that has been read so far.
Helper stream that takes in a stream and the checksum context to help finalize the checksum from the underlying
stream.
The context will be only finalized when the checksum stream has read to the end of stream.
The entry point function to perform a CRC32 (Ethernet, gzip) computation.
Selects a suitable implementation based on hardware capabilities.
Pass 0 in the previousCrc32 parameter as an initial value unless continuing
to update a running crc in a subsequent call.
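For illustration, a bit-at-a-time CRC32 with the same chaining contract (pass 0 initially, pass the running CRC on subsequent calls). Real implementations use lookup tables or hardware instructions; this sketch only demonstrates the contract.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Portable bitwise CRC32 (Ethernet/gzip, reflected polynomial 0xEDB88320). */
static uint32_t crc32_sketch(const uint8_t *buf, size_t len, uint32_t previous_crc) {
    uint32_t crc = ~previous_crc; /* pass 0 on the first call */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
    }
    return ~crc;
}
```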
Combines two CRC32 (Ethernet, gzip) checksums computed over separate data blocks.
This is equivalent to computing the CRC32 of the concatenated data blocks without
having to re-scan the data.
The entry point function to perform a CRC32 (Ethernet, gzip) computation.
Supports buffer lengths up to size_t max.
Selects a suitable implementation based on hardware capabilities.
Pass 0 in the previousCrc32 parameter as an initial value unless continuing
to update a running crc in a subsequent call.
The entry point function to perform a Castagnoli CRC32c (iSCSI) computation.
Selects a suitable implementation based on hardware capabilities.
Pass 0 in the previousCrc32 parameter as an initial value unless continuing
to update a running crc in a subsequent call.
Combines two CRC32C (Castagnoli, iSCSI) checksums computed over separate data blocks.
This is equivalent to computing the CRC32C of the concatenated data blocks without
having to re-scan the data.
The entry point function to perform a Castagnoli CRC32c (iSCSI) computation.
Supports buffer lengths up to size_t max.
Selects a suitable implementation based on hardware capabilities.
Pass 0 in the previousCrc32 parameter as an initial value unless continuing
to update a running crc in a subsequent call.
The entry point function to perform a CRC64-NVME (a.k.a. CRC64-Rocksoft) computation.
Selects a suitable implementation based on hardware capabilities.
Pass 0 in the previousCrc64 parameter as an initial value unless continuing
to update a running crc in a subsequent call.
There are many variants of CRC64 algorithms. This CRC64 variant is bit-reflected (based on
the non bit-reflected polynomial 0xad93d23594c93659) and inverts the CRC input and output bits.
Combines two CRC64-NVME (CRC64-Rocksoft) checksums computed over separate data blocks.
This is equivalent to computing the CRC64-NVME of the concatenated data blocks without
having to re-scan the data.
The entry point function to perform a CRC64-NVME (a.k.a. CRC64-Rocksoft) computation.
Supports buffer lengths up to size_t max.
Selects a suitable implementation based on hardware capabilities.
Pass 0 in the previousCrc64 parameter as an initial value unless continuing
to update a running crc in a subsequent call.
There are many variants of CRC64 algorithms. This CRC64 variant is bit-reflected (based on
the non bit-reflected polynomial 0xad93d23594c93659) and inverts the CRC input and output bits.
Get the elliptic curve key associated with this set of credentials
@param credentials credentials to get the elliptic curve key for
@return the elliptic curve key associated with the credentials, or NULL if no key is associated with
these credentials
Creates a set of AWS credentials that includes an ECC key pair. These credentials do not have a value for
the secret access key; the ecc key takes over that field’s role in sigv4a signing.
Creates a provider that sources credentials from an ordered sequence of providers, with the overall result
being from the first provider to return a valid set of credentials
Creates a provider that sources credentials from the ecs role credentials service
This function doesn’t read anything from the environment and requires everything to be explicitly passed in.
If you need to read properties from the environment, use aws_credentials_provider_new_ecs_from_environment.
Creates a provider that sources credentials from key-value profiles loaded from the aws credentials
file (“~/.aws/credentials” by default) and the aws config file (“~/.aws/config” by
default).
Creates a provider that assumes an IAM role via the STS AssumeRole() API. This provider will fetch new credentials
upon each call to aws_credentials_provider_get_credentials().
Completely destroys a statistics handler. The handler’s cleanup function must clean up the impl portion completely
(including its allocation, if done separately).
Initializes dt to be the time represented by date_str in format ‘fmt’. Returns AWS_OP_SUCCESS if the
string was successfully parsed, returns AWS_OP_ERR if parsing failed.
Copies the current time as a formatted short date string in local time into output_buf. If buffer is too small, it
will return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not
allowed.
Copies the current time as a formatted date string in local time into output_buf. If buffer is too small, it will
return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not
allowed.
Copies the current time as a formatted short date string in utc time into output_buf. If buffer is too small, it will
return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not
allowed.
Copies the current time as a formatted date string in utc time into output_buf. If buffer is too small, it will
return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not
allowed.
Derives an ecc key pair (based on the nist P256 curve) from the access key id and secret access key components
of a set of AWS credentials using an internal key derivation specification. Used to perform sigv4a signing in
the hybrid mode based on AWS credentials.
Warning: The following helpers are intended for use by SDKs that validate correctness of the ruleset at compile time.
The engine sanity checks the provided ruleset, but does not do extensive checking.
Some malformed rulesets might fail in unexpected ways.
Cancels task.
This function must be called from the event loop’s thread, and is only guaranteed
to work properly on tasks scheduled from within the event loop’s thread.
The task will be executed with the AWS_TASK_STATUS_CANCELED status inside this call.
Fetches the next loop for use. The purpose is to enable load balancing across loops. You should not depend on how
this load balancing is done as it is subject to change in the future. Currently it uses the “best-of-two” algorithm
based on the load factor of each loop.
The event loop will schedule the task and run it at the specified time.
Use aws_event_loop_current_clock_time() to query the current time in nanoseconds.
Note that cancelled tasks may execute outside the event loop thread.
This function may be called from outside or inside the event loop thread.
The event loop will schedule the task and run it on the event loop thread as soon as possible.
Note that cancelled tasks may execute outside the event loop thread.
This function may be called from outside or inside the event loop thread.
Variant of aws_event_loop_schedule_task_now that forces all tasks to go through the cross thread task queue,
guaranteeing that order-of-submission is order-of-execution. If you need this guarantee, you must use this
function; the base function contains short-circuiting logic that breaks ordering invariants. Beyond that, all
properties of aws_event_loop_schedule_task_now apply to this function as well.
Get the name of the checksum algorithm to be used in the details of the uploaded parts. See
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompletedPart.html#AmazonS3-Type-CompletedPart
Completes the hash computation and writes the final digest to output.
Allocation of output is the caller’s responsibility. If you specify
truncate_to to something other than 0, the output will be truncated to that
number of bytes. For example, if you want a SHA256 digest as the first 16
bytes, set truncate_to to 16. If you want the full digest size, just set this
to 0.
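The truncate_to rule can be made concrete with a small sketch. This is an illustrative helper, not the CRT implementation; the function name apply_truncate_to is hypothetical.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch (not the CRT implementation) of the truncate_to rule
 * described above: 0 means "keep the full digest"; any other value keeps
 * only that many leading bytes of the digest. */
size_t apply_truncate_to(const unsigned char *digest, size_t digest_len,
                         unsigned char *output, size_t truncate_to) {
    size_t out_len = (truncate_to == 0 || truncate_to > digest_len)
                         ? digest_len
                         : truncate_to;
    memcpy(output, digest, out_len);
    return out_len; /* number of bytes written to output */
}
```

So for a 32-byte SHA256 digest with truncate_to set to 16, only the first 16 bytes land in output.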
Returns an iterator to be used for iterating through a hash table.
Iterator will already point to the first element of the table it finds,
which can be accessed as iter.element.
Deletes the element currently pointed-to by the hash iterator.
After calling this method, the element member of the iterator
should not be accessed until the next call to aws_hash_iter_next.
Convenience hash function which hashes the pointer value directly,
without dereferencing. This can be used in cases where pointer identity
is desired, or where a uintptr_t is encoded into a const void *.
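The idea of hashing the pointer value itself can be sketched as follows. The mixing constant and the name hash_ptr_identity are illustrative assumptions; the CRT's actual mixing function may differ.

```c
#include <stdint.h>

/* Illustrative sketch of a pointer-identity hash: hash the pointer's value
 * itself (via uintptr_t) rather than the bytes it points to. A simple
 * multiplicative mix (Knuth's constant) stands in for the real mixing. */
uint64_t hash_ptr_identity(const void *key) {
    uintptr_t v = (uintptr_t)key;
    return (uint64_t)v * 2654435761u;
}
```

Equal pointers always hash equal, which is exactly the pointer-identity semantics described above.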
Deletes every element from map and frees all associated memory.
destroy_fn will be called for each element. aws_hash_table_init
must be called before reusing the hash table.
Attempts to locate an element at key. If no such element was found,
creates a new element, with value initialized to NULL. In either case, a
pointer to the element is placed in *p_elem.
Compares two hash tables for equality. Both hash tables must have equivalent
key comparators; values will be compared using the comparator passed into this
function. The key hash function does not need to be equivalent between the
two hash tables.
Attempts to locate an element at key. If the element is found, a
pointer to the value is placed in *p_elem; if it is not found,
*p_elem is set to NULL. Either way, AWS_OP_SUCCESS is returned.
Iterates through every element in the map and invokes the callback on
that item. Iteration is performed in an arbitrary, implementation-defined
order, and is not guaranteed to be consistent across invocations.
Initializes a hash map with initial capacity for ‘size’ elements
without resizing. Uses hash_fn to compute the hash of each element.
equals_fn to compute equality of two keys. Whenever an element is
removed without being returned, destroy_key_fn is run on the pointer
to the key and destroy_value_fn is run on the pointer to the value.
Either or both may be NULL if a callback is not desired in this case.
Moves the hash table in ‘from’ to ‘to’. After this move, ‘from’ will
be identical to the state of the original ‘to’ hash table, and ‘to’
will be in the same state as if it had been passed to aws_hash_table_clean_up
(that is, it will have no memory allocated, and it will be safe to
either discard it or call aws_hash_table_clean_up again).
Inserts a new element at key, with the given value. If another element
exists at that key, the old element will be overwritten; both old key and
value objects will be destroyed.
Add a list of headers to be sent as trailing headers after the last chunk is sent. The message should include
a “Trailer” header field which indicates the fields present in the trailer.
Submit a chunk of data to be sent on an HTTP/1.1 stream.
The stream must have specified “chunked” in a “transfer-encoding” header,
and the aws_http_message must NOT have any body stream set.
For client streams, activate() must be called before any chunks are submitted.
For server streams, the response must be submitted before any chunks.
A final chunk with size 0 must be submitted to successfully complete the HTTP-stream.
Get data about the latest GOAWAY frame received from peer (HTTP/2 only).
If no GOAWAY has been received, or the GOAWAY payload is still being transmitted,
AWS_ERROR_HTTP_DATA_NOT_AVAILABLE will be raised.
Get data about the latest GOAWAY frame sent to peer (HTTP/2 only).
If no GOAWAY has been sent, AWS_ERROR_HTTP_DATA_NOT_AVAILABLE will be raised.
Note that GOAWAY frames are typically sent automatically by the connection
during shutdown.
Create an HTTP/2 message from an HTTP/1.1 message.
Pseudo-headers will be created from the context and added to the headers of the new message.
Normal headers will be copied to the headers of the new message.
Note:
Create a new HTTP/2 request message.
Pseudo-headers need to be set via aws_http2_headers_set_request_* on the headers of the aws_http_message.
An error will be raised if the message is used on an HTTP/1.1 connection.
Create a new HTTP/2 response message.
Pseudo-headers need to be set via aws_http2_headers_set_response_status on the headers of the aws_http_message.
An error will be raised if the message is used on an HTTP/1.1 connection.
Reset the HTTP/2 stream (HTTP/2 only).
Note that if the stream closes before this async call is fully processed, the RST_STREAM frame will not be sent.
The stream must have specified http2_use_manual_data_writes during request creation.
For client streams, activate() must be called before any frames are submitted.
For server streams, the response headers must be submitted before any frames.
A write whose options have end_stream set to true will end the stream and prevent any further writes.
Initialize an empty hash-table that maps struct aws_string * to enum aws_http_version.
This map can be used in aws_http_client_connections_options.alpn_string_map.
Begin shutdown sequence of the connection if it hasn’t already started. This will schedule shutdown tasks on the
EventLoop that may send HTTP/TLS/TCP shutdown messages to peers if necessary, and will eventually cause internal
connection memory to stop being accessed and on_shutdown() callback to be called.
Create a stream, with a client connection sending a request.
The request does not start sending automatically once the stream is created. You must call
aws_http_stream_activate to begin execution of the request.
Users must release the connection when they are done with it.
The connection’s memory cannot be reclaimed until this is done.
If the connection was not already shutting down, it will be shut down.
Stop accepting new requests for the connection. This will NOT start the shutdown process for the connection. The
requests that are already open can still wait to be completed, but new requests will fail to be created.
Get all values with this name, combined into one new aws_string that you are responsible for destroying.
If there are multiple headers with this name, their values are appended with comma-separators.
If there are no headers with this name, NULL is returned and AWS_ERROR_HTTP_HEADER_NOT_FOUND is raised.
Get the header at the specified index.
The index of a given header may change any time headers are modified.
When iterating headers, the following ordering rules apply:
Create a new HTTP/1.1 request message.
The message is blank, all properties (method, path, etc) must be set individually.
If an HTTP/1.1 message is used on an HTTP/2 connection, the transformation will be applied automatically:
an HTTP/2 message will be created and sent based on the HTTP/1.1 message.
Like aws_http_message_new_request(), but uses existing aws_http_headers instead of creating a new one.
Acquires a hold on the headers, and releases it when the request is destroyed.
Set the body stream.
NULL is an acceptable value for messages with no body.
Note: The message does NOT take ownership of the body stream.
The stream must not be destroyed until the message is complete.
Clones an existing proxy configuration. A refactor could remove this (do a “move” between the old and new user
data in the one spot it’s used) but that should wait until we have better test cases for the logic where this
gets invoked (ntlm/kerberos chains).
Create a persistent proxy configuration from http connection options
@param allocator memory allocator to use
@param options http connection options to source proxy configuration from
@return
Create a persistent proxy configuration from http connection manager options
@param allocator memory allocator to use
@param options http connection manager options to source proxy configuration from
@return
Establish an arbitrary protocol connection through an http proxy via tunneling CONNECT. Alpn is
not required for this connection process to succeed, but we encourage its use if available.
Initializes non-persistent http proxy options from a persistent http proxy configuration
@param options http proxy options to initialize
@param config the http proxy config to use as an initialization source
Creates a new proxy negotiator from a proxy strategy
@param allocator memory allocator to use
@param strategy strategy to create a new negotiator from
@return a new proxy negotiator if successful, otherwise NULL
A constructor for a proxy strategy that performs basic authentication by adding the appropriate
header and header value to requests or CONNECT requests.
Constructor for an adaptive tunneling proxy strategy. This strategy attempts a vanilla CONNECT and if that
fails it may make followup CONNECT attempts using kerberos or ntlm tokens, based on configuration and proxy
response properties.
Cancel the stream in flight.
For HTTP/1.1 streams, it’s equivalent to closing the connection.
For HTTP/2 streams, it’s equivalent to calling reset on the stream with AWS_HTTP2_ERR_CANCEL.
Gets the HTTP/2 id associated with a stream. Even h1 streams have an id (using the same allocation procedure
as http/2) for easier tracking purposes. For client streams, this will only be non-zero after a successful call
to aws_http_stream_activate().
Create a stream, with a server connection receiving and responding to a request.
This function can only be called from the aws_http_on_incoming_request_fn callback.
aws_http_stream_send_response() should be used to send a response.
Users must release the stream when they are done with it, or its memory will never be cleaned up.
This will not cancel the stream; its callbacks will still fire if the stream is still in progress.
Returns 1 if machine is big endian, 0 if little endian.
If you compile with even -O1 optimization, this check is completely optimized
out at compile time and code which calls “if (aws_is_big_endian())” will do
the right thing without branching.
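A minimal sketch of such a check, assuming the usual "inspect the first byte of a known multi-byte value" technique; the function name is_big_endian here is illustrative, not the CRT's.

```c
#include <stdint.h>

/* Illustrative sketch of an endianness check: store a known 16-bit value
 * and inspect its lowest-addressed byte. Optimizing compilers fold this to
 * a constant, so "if (is_big_endian())" costs nothing at -O1 and above. */
int is_big_endian(void) {
    const uint16_t probe = 0x0100;
    return ((const unsigned char *)&probe)[0] == 0x01;
}
```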
Like isspace(), but ignores C locale.
Return true if ch has the value of ASCII/UTF-8: space (0x20), form feed (0x0C),
line feed (0x0A), carriage return (0x0D), horizontal tab (0x09), or vertical tab (0x0B).
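The locale-independent behavior amounts to checking exactly those six byte values. A sketch (with a hypothetical name, not the library's):

```c
/* Illustrative sketch of a locale-independent isspace: accept exactly the
 * six ASCII whitespace bytes listed above, regardless of the C locale. */
int char_is_space(unsigned char ch) {
    return ch == 0x20 || ch == 0x0C || ch == 0x0A ||
           ch == 0x0D || ch == 0x09 || ch == 0x0B;
}
```

Unlike isspace(), this never classifies locale-specific bytes (e.g. 0xA0 in some locales) as whitespace.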
Checks that a linked list satisfies double linked list connectivity
constraints. This check is O(n) as it traverses the whole linked
list to ensure that tail is reachable from head (and vice versa)
and that every connection is bidirectional.
Checks that the prev of the next pointer of a node points to the
node. As this checks whether the [next] connection of a node is
bidirectional, it returns false if used for the list tail.
Returns a pointer for the last element in the list.
Used to begin iterating the list in reverse. Ex:
for (i = aws_linked_list_rbegin(list); i != aws_linked_list_rend(list); i = aws_linked_list_prev(i)) {…}
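The reverse-iteration idiom above can be made concrete with a minimal sentinel-based doubly linked list. All names here (node, list_*) are illustrative stand-ins, not the CRT's types; the point is that rbegin is the tail sentinel's prev and rend is the head sentinel itself.

```c
#include <stddef.h>

/* Minimal sentinel-based doubly linked list, sketched to illustrate the
 * reverse iteration pattern. Names are illustrative, not the CRT's. */
struct node { struct node *next; struct node *prev; int value; };
struct list { struct node head; struct node tail; };

void list_init(struct list *l) {
    l->head.prev = NULL; l->head.next = &l->tail;
    l->tail.next = NULL; l->tail.prev = &l->head;
}
void list_push_back(struct list *l, struct node *n) {
    n->prev = l->tail.prev; n->next = &l->tail;
    n->prev->next = n; l->tail.prev = n;
}
struct node *list_rbegin(const struct list *l) { return l->tail.prev; }
const struct node *list_rend(const struct list *l) { return &l->head; }

/* Fold values back-to-front using the reverse iteration idiom; the fold is
 * order-sensitive so the traversal direction is observable. */
int list_fold_reverse(const struct list *l) {
    int acc = 0;
    for (const struct node *i = list_rbegin(l); i != list_rend(l); i = i->prev) {
        acc = acc * 10 + i->value;
    }
    return acc;
}
```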
Sets the current logging level for the logger. Loggers are not required to support this.
@param logger logger to set the log level for
@param level new log level for the logger
@return AWS_OP_SUCCESS if the level was successfully set, AWS_OP_ERR otherwise
Returns a lookup table for bytes that is the identity transformation with the exception
of uppercase ascii characters getting replaced with lowercase characters. Used in
caseless comparisons.
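The table-based approach can be sketched as follows; the names and the equal-length comparison helper are illustrative assumptions, not the library's API.

```c
#include <stddef.h>

/* Illustrative sketch of the lookup-table approach: a 256-entry identity
 * table with 'A'..'Z' remapped to 'a'..'z'. All other bytes, including
 * non-ASCII ones, map to themselves. */
static unsigned char to_lower_table[256];

void build_lower_table(void) {
    for (int i = 0; i < 256; ++i) {
        to_lower_table[i] = (unsigned char)i;
    }
    for (int c = 'A'; c <= 'Z'; ++c) {
        to_lower_table[c] = (unsigned char)(c - 'A' + 'a');
    }
}

/* Caseless equality of two equal-length buffers, one table lookup per byte. */
int eq_ignore_case(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        if (to_lower_table[a[i]] != to_lower_table[b[i]]) {
            return 0;
        }
    }
    return 1;
}
```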
Computes the md5 hash over input and writes the digest output to ‘output’.
Use this if you don’t need to stream the data you’re hashing and you can load
the entire input to hash into memory.
Returns at least ‘size’ bytes of memory ready for use. In versions v0.6.8 and prior, this function was allowed to return
NULL. In later versions, if allocator->mem_acquire() returns NULL, this function will assert and exit. To handle
conditions where OOM is not a fatal error, allocator->mem_acquire() is responsible for finding/reclaiming/running a
GC etc…before returning.
Allocates many chunks of bytes into a single block. Expects to be called with alternating void ** (dest), size_t
(size). The first void ** will be set to the root of the allocation. Alignment is assumed to be sizeof(intmax_t).
Allocates a block of memory for an array of num elements, each of them size bytes long, and initializes all its bits
to zero. In versions v0.6.8 and prior, this function was allowed to return NULL.
In later versions, if allocator->mem_calloc() returns NULL, this function will assert and exit. To handle
conditions where OOM is not a fatal error, allocator->mem_calloc() is responsible for finding/reclaiming/running a
GC etc…before returning.
Attempts to adjust the size of the pointed-to memory buffer from oldsize to
newsize. The pointer (*ptr) may be changed if the memory needs to be
reallocated.
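The "the pointer may be changed" contract can be sketched with a plain realloc wrapper. This is an illustrative stand-in (the real function routes through an aws_allocator and raises AWS errors); the name mem_realloc_sketch is hypothetical.

```c
#include <stdlib.h>

/* Illustrative sketch of the resize contract above: on success *ptr may
 * point somewhere new; on failure *ptr is left untouched (still valid for
 * oldsize bytes) and an error is reported via the return value. */
int mem_realloc_sketch(void **ptr, size_t oldsize, size_t newsize) {
    void *p = realloc(*ptr, newsize);
    if (p == NULL && newsize != 0) {
        return -1; /* *ptr unchanged; caller's buffer is still intact */
    }
    (void)oldsize;
    *ptr = p; /* may differ from the original pointer */
    return 0;
}
```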
Blocks until it acquires the lock. While on some platforms, such as Windows,
this may behave as a reentrant mutex, you should not treat it like one. On
platforms where it is possible for it to be non-reentrant, it will be.
Attempts to acquire the lock but returns immediately if it cannot.
While on some platforms, such as Windows, this may behave as a reentrant mutex,
you should not treat it like one. On platforms where it is possible for it to be non-reentrant, it will be.
Note: For windows, minimum support server version is Windows Server 2008 R2 [desktop apps | UWP apps]
Checks that the backpointers of the priority queue are either NULL
or correctly allocated to point at aws_priority_queue_nodes. This
check is O(n), as it accesses every backpointer in a loop, and thus
shouldn’t be used carelessly.
Initializes a priority queue struct for use. This mode will grow memory automatically (exponential model).
Default size is the initial size of the queue.
item_size is the size of each element in bytes. Mixing items types is not supported by this API.
pred is the function that will be used to determine priority.
Initializes a priority queue struct for use. This mode will not allocate any additional memory. When the heap fills,
new enqueue operations will fail with AWS_ERROR_PRIORITY_QUEUE_FULL.
Copies the element of the highest priority, and removes it from the queue. Complexity: O(log(n)).
If queue is empty, AWS_ERROR_PRIORITY_QUEUE_EMPTY will be raised.
Removes a specific node from the priority queue. Complexity: O(log(n))
After removing a node (using either _remove or _pop), the backpointer set at push_ref time is set
to a sentinel value. If this sentinel value is passed to aws_priority_queue_remove,
AWS_ERROR_PRIORITY_QUEUE_BAD_NODE will be raised. Note, however, that passing uninitialized
aws_priority_queue_nodes, or ones from different priority queues, results in undefined behavior.
For iterating over the params in the query string.
param is an in/out argument used to track progress; it MUST be zeroed out to start.
If true is returned, param contains the value of the next param.
If false is returned, there are no further params.
Parses query string and stores the parameters in ‘out_params’. Returns AWS_OP_SUCCESS on success and
AWS_OP_ERR on failure. The user is responsible for initializing out_params with item size of struct aws_query_param.
The user is also responsible for cleaning up out_params when finished.
Decrements a ref-counter’s ref count. Invokes the on_zero callback if the ref count drops to zero
@param ref_count ref-counter to decrement the count for
@return the value of the decremented ref count
TODO: this needs to be a private function (wait till we have the cmake story
better before moving it though). It should be external for the purpose of
other libs we own, but customers should not be able to hit it without going
out of their way to do so.
Attempts to acquire a retry token for use with retries. On success, on_acquired will be invoked when a token is
available, or an error will be returned if the timeout expires. partition_id identifies operations that should be
grouped together. This allows for more sophisticated strategies such as AIMD and circuit breaker patterns. Pass NULL
to use the global partition.
Creates a retry strategy using exponential backoff. This strategy does not perform any bookkeeping on error types and
success. There is no circuit breaker functionality in here. See the comments above for
aws_exponential_backoff_retry_options.
This retry strategy is used to disable retries. Passed config can be null.
Calling aws_retry_strategy_acquire_retry_token will raise error AWS_IO_RETRY_PERMISSION_DENIED.
Calling any function apart from the aws_retry_strategy_acquire_retry_token and aws_retry_strategy_release will
result in a fatal error.
This is a retry implementation that cuts off traffic if it’s
detected that an endpoint partition is having availability
problems. This is necessary to keep from making outages worse
by scheduling work that’s unlikely to succeed yet increases
load on an already ailing system.
Schedules a retry based on the backoff and token based strategies. retry_ready is invoked when the retry is either
ready for execution or if it has been canceled due to application shutdown.
Records a successful retry. This is used for making future decisions to open up token buckets, AIMD breakers etc…
some strategies such as exponential backoff will ignore this, but you should always call it after a successful
operation or your system will never recover during an outage.
Releases the reference count for token. This should always be invoked after either calling
aws_retry_strategy_schedule_retry() and failing, or after calling aws_retry_token_record_success().
Optimize the buffer pool for allocations of a specific size.
Creates a separate list of blocks dedicated to this size for better memory efficiency.
Allocations of exactly this size will use these special blocks instead of the regular primary/secondary storage.
Align a range size to the buffer pool’s allocation strategy.
This function determines the optimal aligned size based on the buffer pool’s configuration.
For sizes within the primary allocation range, it aligns to chunk boundaries.
For larger sizes that go to secondary storage, it returns the size as-is.
Add a reference, keeping this object alive.
The reference must be released when you are done with it, or its memory will never be cleaned up.
You must not pass in NULL.
Always returns the same pointer that was passed in.
Release a reference.
When the reference count drops to 0, this object will be cleaned up.
It’s OK to pass in NULL (nothing happens).
Always returns NULL.
Creates a new S3 endpoint resolver.
Warning: Before using this header, you have to enable it by
setting cmake config AWS_ENABLE_S3_ENDPOINT_RESOLVER=ON
Add a reference, keeping this object alive.
The reference must be released when you are done with it, or its memory will never be cleaned up.
You must not pass in NULL.
Always returns the same pointer that was passed in.
Note: pause is currently only supported on upload requests.
In order to pause an ongoing upload, call aws_s3_meta_request_pause(), which
returns a resume token. The token can be used to query the state of the
operation at the time of pausing.
To resume an upload that was paused, supply the resume token in the meta
request options structure member aws_s3_meta_request_options.resume_token.
The upload can be resumed either from the same client or a different one.
Corner cases for resume upload are as follows:
Release a reference.
When the reference count drops to 0, this object will be cleaned up.
It’s OK to pass in NULL (nothing happens).
Always returns NULL.
Add a reference, keeping this object alive.
The reference must be released when you are done with it, or its memory will never be cleaned up.
Always returns the same pointer that was passed in.
Get the extended request ID from aws_s3_request_metrics.
If unavailable, AWS_ERROR_S3_METRIC_DATA_NOT_AVAILABLE will be raised.
If available, out_extended_request_id will be set to a string. Be warned this string’s lifetime is tied to the
metrics object.
Get the host_address of the request.
If unavailable, AWS_ERROR_S3_METRIC_DATA_NOT_AVAILABLE will be raised.
If available, out_host_address will be set to a string. Be warned this string’s lifetime is tied to the metrics
object.
Get the IP address of the request connected to.
If unavailable, AWS_ERROR_S3_METRIC_DATA_NOT_AVAILABLE will be raised.
If available, out_ip_address will be set to a string. Be warned this string’s lifetime is tied to the metrics object.
Get the S3 operation name of the request (e.g. “HeadObject”).
If unavailable, AWS_ERROR_S3_METRIC_DATA_NOT_AVAILABLE will be raised.
If available, out_operation_name will be set to a string.
Be warned this string’s lifetime is tied to the metrics object.
Getters for s3 request metrics.
Get the request ID from aws_s3_request_metrics.
If unavailable, AWS_ERROR_S3_METRIC_DATA_NOT_AVAILABLE will be raised.
If available, out_request_id will be set to a string. Be warned this string’s lifetime is tied to the metrics
object.
Get the path and query of the request.
If unavailable, AWS_ERROR_S3_METRIC_DATA_NOT_AVAILABLE will be raised.
If available, out_request_path_query will be set to a string. Be warned this string’s lifetime is tied to the metrics
object.
Release a reference.
When the reference count drops to 0, this object will be cleaned up.
It’s OK to pass in NULL (nothing happens).
Always returns NULL.
Return operation name for aws_s3_request_type,
or empty string if the type doesn’t map to an actual operation.
For example:
AWS_S3_REQUEST_TYPE_HEAD_OBJECT -> “HeadObject”
AWS_S3_REQUEST_TYPE_UNKNOWN -> “”
AWS_S3_REQUEST_TYPE_MAX -> “”
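The mapping above is a plain switch with an empty-string fallback. A sketch under stated assumptions: the enum values and function name below are illustrative stand-ins, not the library's full list.

```c
/* Illustrative sketch of the request-type to operation-name mapping. The
 * enum values shown are examples; unmapped values yield "". */
enum request_type {
    REQUEST_TYPE_UNKNOWN,
    REQUEST_TYPE_HEAD_OBJECT,
    REQUEST_TYPE_GET_OBJECT,
    REQUEST_TYPE_MAX,
};

const char *request_type_operation_name(enum request_type type) {
    switch (type) {
        case REQUEST_TYPE_HEAD_OBJECT: return "HeadObject";
        case REQUEST_TYPE_GET_OBJECT: return "GetObject";
        default: return ""; /* UNKNOWN, MAX, and anything unmapped */
    }
}
```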
Computes the length of a c string in bytes assuming the character set is either ASCII or UTF-8. If no NULL character
is found within max_read_len of str, AWS_ERROR_C_STRING_BUFFER_NOT_NULL_TERMINATED is raised. Otherwise, str_len
will contain the string length minus the NULL character, and AWS_OP_SUCCESS will be returned.
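The bounded-scan contract can be sketched as below. This is an illustrative stand-in (the real function raises AWS_ERROR_C_STRING_BUFFER_NOT_NULL_TERMINATED and returns AWS_OP_ERR; here -1 stands in for that).

```c
#include <stddef.h>

/* Illustrative sketch of the bounded c-string length contract: scan at
 * most max_read_len bytes for the terminating NUL. Returns 0 and sets
 * *str_len on success; returns -1 (standing in for the raised error) if
 * no NUL is found within the limit. */
int c_str_len_bounded(const char *str, size_t max_read_len, size_t *str_len) {
    for (size_t i = 0; i < max_read_len; ++i) {
        if (str[i] == '\0') {
            *str_len = i; /* length excludes the NUL terminator */
            return 0;
        }
    }
    return -1;
}
```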
Shuts down ‘listener’ and cleans up any resources associated with it. Any incoming channels on listener will still
be active. destroy_callback will be invoked after the server socket listener is destroyed, and all associated
connections and channels have finished shutting down.
Sets up a server socket listener. If you are planning on using TLS, use
aws_server_bootstrap_new_tls_socket_listener instead. This creates a socket listener bound to local_endpoint
using socket options options. incoming_callback will be invoked once an incoming channel is ready for use or if
an error is encountered. shutdown_callback will be invoked once the channel has shutdown. destroy_callback will
be invoked after the server socket listener is destroyed, and all associated connections and channels have finished
shutting down. Immediately after the shutdown_callback returns, the channel is cleaned up automatically. All
callbacks are invoked on the thread of the event loop that the listening socket is assigned to.
Set the implementation of md5 to use. If you compiled without BYO_CRYPTO,
you do not need to call this. However, if you use this, we will honor it,
regardless of compile options. This may be useful for testing purposes. If
you did set BYO_CRYPTO and you do not call this function, you will
segfault.
Set the implementation of sha1 to use. If you compiled without
BYO_CRYPTO, you do not need to call this. However, if you use this, we will
honor it, regardless of compile options. This may be useful for testing
purposes. If you did set BYO_CRYPTO and you do not call this function,
you will segfault.
Set the implementation of sha256 to use. If you compiled without
BYO_CRYPTO, you do not need to call this. However, if you use this, we will
honor it, regardless of compile options. This may be useful for testing
purposes. If you did set BYO_CRYPTO and you do not call this function,
you will segfault.
Set the implementation of sha512 to use. If you compiled without
BYO_CRYPTO, you do not need to call this. However, if you use this, we will
honor it, regardless of compile options. This may be useful for testing
purposes. If you did set BYO_CRYPTO and you do not call this function,
you will segfault.
Computes the sha1 hash over input and writes the digest output to ‘output’.
Use this if you don’t need to stream the data you’re hashing and you can load
the entire input to hash into memory. If you specify truncate_to to something
other than 0, the output will be truncated to that number of bytes. For
example, if you want a SHA1 digest as the first 16 bytes, set truncate_to
to 16. If you want the full digest size, just set this to 0.
Computes the sha256 hash over input and writes the digest output to ‘output’.
Use this if you don’t need to stream the data you’re hashing and you can load
the entire input to hash into memory. If you specify truncate_to to something
other than 0, the output will be truncated to that number of bytes. For
example, if you want a SHA256 digest as the first 16 bytes, set truncate_to
to 16. If you want the full digest size, just set this to 0.
Computes the sha512 hash over input and writes the digest output to ‘output’.
Use this if you don’t need to stream the data you’re hashing and you can load
the entire input to hash into memory. If you specify truncate_to to something
other than 0, the output will be truncated to that number of bytes. For
example, if you want a SHA512 digest as the first 16 bytes, set truncate_to
to 16. If you want the full digest size, just set this to 0.
(Asynchronous) entry point to sign something (a request, a chunk, an event) with an AWS signing process.
Depending on the configuration, the signing process may or may not complete synchronously.
Compares lexicographical ordering of two strings. This is a binary
byte-by-byte comparison, treating bytes as unsigned integers. It is suitable
for either textual or binary data and is unaware of unicode or any other byte
encoding. If both strings are identical in the bytes of the shorter string,
then the longer string is lexicographically after the shorter.
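The ordering rule reduces to a memcmp over the common prefix with length as the tie-breaker. A minimal sketch, assuming length-delimited buffers (the function name is illustrative):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch of the lexicographic ordering rule: compare the
 * common prefix byte-by-byte as unsigned values, and if it ties, the
 * shorter buffer orders first. Encoding-unaware; works for binary data. */
int compare_lexical(const unsigned char *a, size_t a_len,
                    const unsigned char *b, size_t b_len) {
    size_t n = a_len < b_len ? a_len : b_len;
    int cmp = memcmp(a, b, n);
    if (cmp != 0) {
        return cmp < 0 ? -1 : 1;
    }
    return (a_len < b_len) ? -1 : (a_len > b_len) ? 1 : 0;
}
```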
Evaluates the set of properties that define the shape of all valid aws_string structures.
It is also a cheap check, in the sense that it runs in constant time (i.e., no loops or recursion).
Converts a c-string constant to a log level value. Uses case-insensitive comparison
and simply iterates all possibilities until a match or nothing remains. If no match
is found, AWS_OP_ERR is returned.
Empties and executes all queued tasks, passing the AWS_TASK_STATUS_CANCELED status to the task function.
Cleans up any memory allocated, and prepares the instance for reuse or deletion.
Returns whether the scheduler has any scheduled tasks.
next_task_time (optional) will be set to time of the next task, note that 0 will be set if tasks were
added via aws_task_scheduler_schedule_now() and UINT64_MAX will be set if no tasks are scheduled at all.
Sequentially execute all tasks scheduled to run at, or before current_time.
AWS_TASK_STATUS_RUN_READY will be passed to the task function as the task status.
Adds a callback to the chain to be called when the current thread joins.
Callbacks are called from the current thread, in the reverse order they
were added, after the thread function returns.
If not called from within an aws_thread, has no effect.
Gets name of the current thread.
Caller is responsible for destroying returned string.
If thread does not have a name, AWS_OP_SUCCESS is returned and out_name is
set to NULL.
If underlying OS call fails, AWS_ERROR_SYS_CALL_FAILURE will be raised
If OS does not support getting thread name, AWS_ERROR_PLATFORM_NOT_SUPPORTED
will be raised
Decrements the count of unjoined threads in the managed thread system. Used by managed threads and
event loop threads. Additional usage requires the user to join corresponding threads themselves and
correctly increment/decrement even in the face of launch/join errors.
Converts an aws_thread_id_t to a c-string. For portability, aws_thread_id_t
must not be printed directly. Intended primarily to support building log
lines that include the thread id in them. The parameter buffer must
point to a char buffer of length bufsz == AWS_THREAD_ID_T_REPR_BUFSZ. The
thread id representation is returned in buffer.
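The "render an opaque id into a caller-provided buffer" pattern can be sketched as follows. This is illustrative only: the real aws_thread_id_t layout is platform-specific, and the names ID_REPR_BUFSZ and thread_id_to_repr are hypothetical.

```c
#include <stdio.h>

/* Illustrative sketch: format an opaque id as fixed-width hex into a
 * caller-provided buffer. The buffer-size constant mirrors the contract
 * above; the real type's size and layout vary by platform. */
#define ID_REPR_BUFSZ (sizeof(unsigned long) * 2 + 1)

const char *thread_id_to_repr(unsigned long id, char *buffer, size_t bufsz) {
    if (bufsz < ID_REPR_BUFSZ) {
        return NULL; /* buffer too small for the full representation */
    }
    snprintf(buffer, bufsz, "%0*lx", (int)(sizeof(unsigned long) * 2), id);
    return buffer;
}
```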
Increments the count of unjoined threads in the managed thread system. Used by managed threads and
event loop threads. Additional usage requires the user to join corresponding threads themselves and
correctly increment/decrement even in the face of launch/join errors.
Creates an OS level thread and associates it with func. context will be passed to func when it is executed.
options will be applied to the thread if they are applicable for the platform.
Gets name of the thread.
Caller is responsible for destroying returned string.
If thread does not have a name, AWS_OP_SUCCESS is returned and out_name is
set to NULL.
If underlying OS call fails, AWS_ERROR_SYS_CALL_FAILURE will be raised
If OS does not support getting thread name, AWS_ERROR_PLATFORM_NOT_SUPPORTED
will be raised
Overrides how long, in nanoseconds, that aws_thread_join_all_managed will wait for threads to complete.
A value of zero will result in an unbounded wait.
Convert a c library io error into an aws error, and raise it.
If no conversion is found, AWS_ERROR_SYS_CALL_FAILURE is raised.
Always returns AWS_OP_ERR.
Removes any padding added to the end of a sigv4a signature. Signature must be hex-encoded.
@param signature signature to remove padding from
@return cursor that ranges over only the valid hex encoding of the sigv4a signature
Initializes uri to values specified in options. Returns AWS_OP_SUCCESS, on success, AWS_OP_ERR on failure.
After calling this function, the parts can be accessed.
Parses ‘uri_str’ and initializes uri. Returns AWS_OP_SUCCESS, on success, AWS_OP_ERR on failure.
After calling this function, the parts can be accessed.
Returns the port portion of the authority if it was present, otherwise, returns 0.
If this is 0, it is the user’s job to determine the correct port based on scheme and protocol.
For iterating over the params in the uri query string.
param is an in/out argument used to track progress; it MUST be zeroed out to start.
If true is returned, param contains the value of the next param.
If false is returned, there are no further params.
Parses query string and stores the parameters in ‘out_params’. Returns AWS_OP_SUCCESS on success and
AWS_OP_ERR on failure. The user is responsible for initializing out_params with item size of struct aws_query_param.
The user is also responsible for cleaning up out_params when finished.
Returns the scheme portion of the uri (e.g. http, https, ftp, ftps, etc…). If the scheme was not present
in the uri, the returned value will be empty. It is the user’s job to determine the appropriate defaults
if this field is empty, based on protocol, port, etc…
If ALPN is being used this function will be invoked by the channel once an ALPN message is received. The returned
channel_handler will be added to, and managed by, the channel.
Invoked when the HTTP/2 settings change is complete.
If the connection was set up successfully, this will always be invoked, whether the settings change succeeds or fails.
If error_code is AWS_ERROR_SUCCESS (0), then the peer has acknowledged the settings and the change has been applied.
If error_code is non-zero, then a connection error occurred before the settings could be fully acknowledged and
applied. This is always invoked on the connection’s event-loop thread.
Invoked when an HTTP/2 GOAWAY frame is received from peer.
Implies that the peer has initiated shutdown, or encountered a serious error.
Once a GOAWAY is received, no further streams may be created on this connection.
Invoked when the HTTP/2 PING completes, whether peer has acknowledged it or not.
If error_code is AWS_ERROR_SUCCESS (0), then the peer has acknowledged the PING and round_trip_time_ns will be the
round trip time in nanoseconds for the connection.
If error_code is non-zero, then a connection error occurred before the PING was acknowledged, and round_trip_time_ns
is meaningless in that case.
Invoked when new HTTP/2 settings from peer have been applied.
settings_array is the array of aws_http2_settings containing all the settings just changed, in the order they were
applied (the order the settings arrived). num_settings is the number of elements in that array.
Function to invoke when a message transformation completes.
This function MUST be invoked or the application will soft-lock.
message and complete_ctx must be the same pointers provided to the aws_http_message_transform_fn.
error_code should be AWS_ERROR_SUCCESS if transformation was successful,
otherwise pass a different AWS_ERROR_X value.
A function that may modify a request or response before it is sent.
The transformation may be asynchronous or immediate.
The user MUST invoke the complete_fn when transformation is complete or the application will soft-lock.
When invoking the complete_fn, pass along the message and complete_ctx provided here and an error code.
The error code should be AWS_ERROR_SUCCESS if transformation was successful,
otherwise pass a different AWS_ERROR_X value.
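The contract above can be sketched with simplified stand-ins (these are not the real aws_http_message_transform_fn types; `http_message`, `transform_complete_fn`, and `sign_request_transform` are illustrative). An immediate transform does its work and invokes the completion function before returning; an asynchronous one would stash the arguments and invoke it later. Either way, skipping the call soft-locks the application:

```c
#include <stddef.h>

#define AWS_ERROR_SUCCESS 0

struct http_message; /* opaque stand-in for struct aws_http_message */

/* Stand-in for the completion function: the transform MUST invoke this exactly
 * once, passing back the same message and complete_ctx it was given. */
typedef void(transform_complete_fn)(struct http_message *message, int error_code, void *complete_ctx);

/* Stand-in for a message transform. This one completes immediately. */
static void sign_request_transform(struct http_message *message, void *user_data,
                                   transform_complete_fn *complete_fn, void *complete_ctx) {
    (void)user_data;
    /* ... e.g. sign the request or add headers to `message` here ... */
    complete_fn(message, AWS_ERROR_SUCCESS, complete_ctx);
}

static int g_last_error_code = -1;

/* Example completion: records the error code and marks the transform done. */
static void on_transform_complete(struct http_message *message, int error_code, void *complete_ctx) {
    (void)message;
    g_last_error_code = error_code;
    *(int *)complete_ctx = 1;
}
```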
Invoked when the connection has finished shutting down.
Never invoked if on_setup failed.
This is always invoked on connection’s event-loop thread.
Note that the connection is not completely done until on_shutdown has been invoked
AND aws_http_connection_release() has been called.
Called repeatedly as body data is received.
The data must be copied immediately if you wish to preserve it.
This is always invoked on the HTTP connection’s event-loop thread.
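The copy-before-return rule can be illustrated with stand-in types (a simplified `byte_cursor` instead of the real struct aws_byte_cursor; `on_body` and `body_buffer` are hypothetical):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Non-owning view, a simplified stand-in for struct aws_byte_cursor. */
struct byte_cursor { const uint8_t *ptr; size_t len; };

/* Application-owned growable buffer. */
struct body_buffer { uint8_t *data; size_t len; };

/* Body callback pattern: `data` is only valid for the duration of the call,
 * so append a copy into memory the application owns before returning. */
static int on_body(struct byte_cursor data, void *user_data) {
    struct body_buffer *buf = user_data;
    uint8_t *grown = realloc(buf->data, buf->len + data.len);
    if (grown == NULL) {
        return -1; /* allocation failure */
    }
    memcpy(grown + buf->len, data.ptr, data.len);
    buf->data = grown;
    buf->len += data.len;
    return 0;
}
```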
Invoked when the incoming header block of this type (informational/main/trailing) has been completely read.
This is always invoked on the HTTP connection’s event-loop thread.
Invoked repeatedly as headers are received.
At this point, aws_http_stream_get_incoming_response_status() can be called for the client.
And aws_http_stream_get_incoming_request_method() and aws_http_stream_get_incoming_request_uri() can be called for
the server.
This is always invoked on the HTTP connection’s event-loop thread.
Invoked when a request/response stream is complete, whether successful or unsuccessful.
This is always invoked on the HTTP connection’s event-loop thread.
This will not be invoked if the stream is never activated.
Invoked when the request/response stream is destroyed completely.
This can be invoked on the same thread that releases the refcount on the HTTP stream.
This is invoked even if the stream is never activated.
Invoked right before the request/response stream completes, to report the tracing metrics for the aws_http_stream.
This may be invoked synchronously when aws_http_stream_release() is called.
This is invoked even if the stream is never activated.
See aws_http_stream_metrics for details.
Tunneling proxy connections only. A callback that lets the negotiator examine the headers in the
response to the most recent CONNECT request as they arrive.
Synchronous (for now) callback function to fetch a token used to modify the CONNECT request. Includes a (byte string)
context intended to be used as part of a challenge-response flow.
User-supplied transform callback which implements the proxy request flow and ultimately, across all execution
pathways, invokes either the terminate function or the forward function appropriately.
Invoked when the data stream of an outgoing HTTP write operation is no longer in use.
This is always invoked on the HTTP connection’s event-loop thread.
Invoked once an address has been resolved for host. The type in host_addresses is struct aws_host_address (by-value).
The caller does not own this memory; copy the host address before returning from this function if you
plan to use it later. For convenience, we’ve provided the aws_host_address_copy() and aws_host_address_clean_up()
functions.
Function signature for configuring your own resolver (the default just uses getaddrinfo()). The type in
output_addresses is struct aws_host_address (by-value). We assume this function blocks, hence this absurdly
complicated design.
Invoked after a successful call to aws_retry_strategy_schedule_retry(). This function will always be invoked if and
only if aws_retry_strategy_schedule_retry() returns AWS_OP_SUCCESS. It will never be invoked synchronously from
aws_retry_strategy_schedule_retry(). After attempting the operation, either call aws_retry_strategy_schedule_retry()
with an aws_retry_error_type or call aws_retry_token_record_success() and then release the token via
aws_retry_token_release().
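The token lifecycle described above can be sketched with hypothetical stand-ins. The real aws_retry_* API is asynchronous and callback-driven; this sketch flattens the acquire/attempt/schedule-retry/record-success cycle into a loop (`retry_token`, `schedule_retry`, and `run_with_retries` are illustrative, not the library's API):

```c
#include <stdbool.h>

struct retry_token { int retries_left; };

/* Mirrors the role of aws_retry_strategy_schedule_retry(): succeeds only
 * while the strategy still permits another attempt. */
static bool schedule_retry(struct retry_token *token) {
    if (token->retries_left <= 0) {
        return false;
    }
    token->retries_left--;
    return true;
}

/* Attempt the operation until it succeeds or the strategy refuses to retry. */
static bool run_with_retries(struct retry_token *token, bool (*operation)(void)) {
    for (;;) {
        if (operation()) {
            /* real code: aws_retry_token_record_success(), then
             * aws_retry_token_release() */
            return true;
        }
        if (!schedule_retry(token)) {
            /* real code: aws_retry_token_release(), report failure */
            return false;
        }
    }
}

static int g_attempts;
/* Demo operation that fails twice, then succeeds on the third attempt. */
static bool flaky_operation(void) { return ++g_attempts >= 3; }
```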
Invoked upon the acquisition, or failure to acquire a retry token. This function will always be invoked if and only
if aws_retry_strategy_acquire_retry_token() returns AWS_OP_SUCCESS. It will never be invoked synchronously from
aws_retry_strategy_acquire_retry_token(). Token will always be NULL if error_code is non-zero, and vice-versa. If
token is non-null, it will have a reference count of 1, and you must call aws_retry_token_release() on it later. See
the comments for aws_retry_strategy_on_retry_ready_fn for more info.
Factory to construct the pool for the given config. Passes along buffer-related info configured on the client, which
the factory may ignore when considering how to construct the pool.
This implementation should fail if the pool cannot be constructed for some reason (e.g. if config params cannot be
met), by logging the failure reason, returning NULL, and raising an aws_error.
Optional callback, for you to provide the full object checksum after the object was read.
The client will NOT check the provided checksum before sending it to the server.
Invoked to report progress of a meta-request.
For PutObject, progress refers to bytes uploaded.
For CopyObject, progress refers to bytes copied.
For GetObject, progress refers to bytes downloaded.
For anything else, progress refers to response body bytes received.
Invoked to report the telemetry of the meta request once a single request finishes.
Note: *metrics is only valid for the duration of the callback. If you need to keep it around, use
aws_s3_request_metrics_acquire().
Optional callback, for you to review an upload before it completes.
For example, you can review each part’s checksum and fail the upload if
you do not agree with them.
The factory function for the S3 client to create an S3 Express credentials provider.
The S3 client will be the only owner of the S3 Express credentials provider.
If TLS is being used, this function is called once the socket has received an incoming connection, the channel has
been initialized, and TLS has been successfully negotiated. A TLS handler has already been added to the channel. If
TLS negotiation fails, this function will be called with the corresponding error code.
This function is only used for an async listener (the Apple Network Framework in this case). Once the server
listener socket has finished setup and started listening, this function will be invoked.