// miku_h2/client.rs
//! Client implementation of the HTTP/2 protocol.
//!
//! # Getting started
//!
//! Running an HTTP/2 client requires the caller to establish the underlying
//! connection as well as get the connection to a state that is ready to begin
//! the HTTP/2 handshake. See [here](../index.html#handshake) for more
//! details.
//!
//! This could be as basic as using Tokio's [`TcpStream`] to connect to a remote
//! host, but usually it means using either ALPN or HTTP/1.1 protocol upgrades.
//!
//! Once a connection is obtained, it is passed to [`handshake`], which will
//! begin the [HTTP/2 handshake]. This returns a future that completes once
//! the handshake process is performed and HTTP/2 streams may be initialized.
//!
//! [`handshake`] uses default configuration values. There are a number of
//! settings that can be changed by using [`Builder`] instead.
//!
//! Once the handshake future completes, the caller is provided with a
//! [`Connection`] instance and a [`SendRequest`] instance. The [`Connection`]
//! instance is used to drive the connection (see [Managing the connection]).
//! The [`SendRequest`] instance is used to initialize new streams (see [Making
//! requests]).
//!
//! # Making requests
//!
//! Requests are made using the [`SendRequest`] handle provided by the handshake
//! future. Once a request is submitted, an HTTP/2 stream is initialized and
//! the request is sent to the server.
//!
//! A request body and request trailers are sent using [`SendStream`], and the
//! server's response is returned once the [`ResponseFuture`] future completes.
//! Both the [`SendStream`] and [`ResponseFuture`] instances are returned by
//! [`SendRequest::send_request`] and are tied to the HTTP/2 stream
//! initialized by the sent request.
//!
//! The [`SendRequest::poll_ready`] function returns `Ready` when a new HTTP/2
//! stream can be created, i.e. as long as the current number of active streams
//! is below [`MAX_CONCURRENT_STREAMS`]. If a new stream cannot be created, the
//! caller will be notified once an existing stream closes, freeing capacity for
//! the caller. The caller should use [`SendRequest::poll_ready`] to check for
//! capacity before sending a request to the server.
//!
//! [`SendRequest`] enforces the [`MAX_CONCURRENT_STREAMS`] setting. The user
//! must not send a request if `poll_ready` does not return `Ready`. Attempting
//! to do so will result in an [`Error`] being returned.
//!
//! # Managing the connection
//!
//! The [`Connection`] instance is used to manage connection state. The caller
//! is required to call [`Connection::poll`] in order to advance state.
//! [`SendRequest::send_request`] and other functions have no effect unless
//! [`Connection::poll`] is called.
//!
//! The [`Connection`] instance should only be dropped once [`Connection::poll`]
//! returns `Ready`. At this point, the underlying socket has been closed and no
//! further work needs to be done.
//!
//! The easiest way to ensure that the [`Connection`] instance gets polled is to
//! submit the [`Connection`] instance to an [executor]. The executor will then
//! manage polling the connection until the connection is complete.
//! Alternatively, the caller can call `poll` manually.
//!
//! # Example
//!
//! ```rust, no_run
//!
//! use miku_h2::client;
//!
//! use http::{Request, Method};
//! use std::error::Error;
//! use tokio::net::TcpStream;
//!
//! #[tokio::main]
//! pub async fn main() -> Result<(), Box<dyn Error>> {
//!     // Establish TCP connection to the server.
//!     let tcp = TcpStream::connect("127.0.0.1:5928").await?;
//!     let (h2, connection) = client::handshake(tcp).await?;
//!     tokio::spawn(async move {
//!         connection.await.unwrap();
//!     });
//!
//!     let mut h2 = h2.ready().await?;
//!     // Prepare the HTTP request to send to the server.
//!     let request = Request::builder()
//!         .method(Method::GET)
//!         .uri("https://www.example.com/")
//!         .body(())
//!         .unwrap();
//!
//!     // Send the request. The second tuple item allows the caller
//!     // to stream a request body.
//!     let (response, _) = h2.send_request(request, true).unwrap();
//!
//!     let (head, mut body) = response.await?.into_parts();
//!
//!     println!("Received response: {:?}", head);
//!
//!     // The `flow_control` handle allows the caller to manage
//!     // flow control.
//!     //
//!     // Whenever data is received, the caller is responsible for
//!     // releasing capacity back to the server once it has freed
//!     // the data from memory.
//!     let mut flow_control = body.flow_control().clone();
//!
//!     while let Some(chunk) = body.data().await {
//!         let chunk = chunk?;
//!         println!("RX: {:?}", chunk);
//!
//!         // Let the server send more data.
//!         let _ = flow_control.release_capacity(chunk.len());
//!     }
//!
//!     Ok(())
//! }
//! ```
//!
//! [`TcpStream`]: https://docs.rs/tokio/latest/tokio/net/struct.TcpStream.html
//! [`handshake`]: fn.handshake.html
//! [executor]: https://docs.rs/futures/0.1/futures/future/trait.Executor.html
//! [`SendRequest`]: struct.SendRequest.html
//! [`SendStream`]: ../struct.SendStream.html
//! [Making requests]: #making-requests
//! [Managing the connection]: #managing-the-connection
//! [`Connection`]: struct.Connection.html
//! [`Connection::poll`]: struct.Connection.html#method.poll
//! [`SendRequest::send_request`]: struct.SendRequest.html#method.send_request
//! [`MAX_CONCURRENT_STREAMS`]: http://httpwg.org/specs/rfc7540.html#SettingValues
//! [`ResponseFuture`]: struct.ResponseFuture.html
//! [`SendRequest::poll_ready`]: struct.SendRequest.html#method.poll_ready
//! [HTTP/2 handshake]: http://httpwg.org/specs/rfc7540.html#ConnectionHeader
//! [`Builder`]: struct.Builder.html
//! [`Error`]: ../struct.Error.html

use crate::codec::{Codec, SendError, UserError};
use crate::ext::{Protocol, PseudoType};
use crate::frame::{Headers, Priority, Pseudo, Reason, Settings, StreamDependency, StreamId};
use crate::proto::{self, Error};
use crate::{FlowControl, PingPong, RecvStream, SendStream};

use bytes::{Buf, Bytes};
use http::{uri, HeaderMap, Method, Request, Response, Version};
use std::fmt;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt};
use tracing::Instrument;

/// Initializes new HTTP/2 streams on a connection by sending a request.
///
/// This type does no work itself. Instead, it is a handle to the inner
/// connection state held by [`Connection`]. If the associated connection
/// instance is dropped, all `SendRequest` functions will return [`Error`].
///
/// [`SendRequest`] instances can be moved to, and operated on from, tasks or
/// threads other than the one driving their associated [`Connection`]
/// instance. Internally, there is a buffer used to stage requests before they
/// get written to the connection. There is no guarantee that requests get
/// written to the connection in FIFO order as HTTP/2 prioritization logic can
/// play a role.
///
/// [`SendRequest`] implements [`Clone`], enabling the creation of many
/// instances that are backed by a single connection.
///
/// See [module] level documentation for more details.
///
/// [module]: index.html
/// [`Connection`]: struct.Connection.html
/// [`Clone`]: https://doc.rust-lang.org/std/clone/trait.Clone.html
/// [`Error`]: ../struct.Error.html
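///
/// # Examples
///
/// A minimal sketch of sharing one connection across tasks by cloning the
/// handle:
///
/// ```rust
/// # use miku_h2::client::*;
/// # use bytes::Bytes;
/// # fn doc(send_request: SendRequest<Bytes>) {
/// // Each task gets its own handle; all requests share one connection.
/// let other = send_request.clone();
/// tokio::spawn(async move {
///     match other.ready().await {
///         Ok(_other) => { /* ... use `_other` to send requests ... */ }
///         Err(e) => eprintln!("connection error: {}", e),
///     }
/// });
/// # }
/// ```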
pub struct SendRequest<B: Buf> {
    inner: proto::Streams<B, Peer>,
    pending: Option<proto::OpaqueStreamRef>,
}

/// Returns a `SendRequest` instance once it is ready to send at least one
/// request.
#[derive(Debug)]
pub struct ReadySendRequest<B: Buf> {
    inner: Option<SendRequest<B>>,
}

/// Manages all state associated with an HTTP/2 client connection.
///
/// A `Connection` is backed by an I/O resource (usually a TCP socket) and
/// implements the HTTP/2 client logic for that connection. It is responsible
/// for driving the internal state forward, performing the work requested of the
/// associated handles ([`SendRequest`], [`ResponseFuture`], [`SendStream`],
/// [`RecvStream`]).
///
/// `Connection` values are created by calling [`handshake`]. Once a
/// `Connection` value is obtained, the caller must repeatedly call [`poll`]
/// until `Ready` is returned. The easiest way to do this is to submit the
/// `Connection` instance to an [executor].
///
/// [module]: index.html
/// [`handshake`]: fn.handshake.html
/// [`SendRequest`]: struct.SendRequest.html
/// [`ResponseFuture`]: struct.ResponseFuture.html
/// [`SendStream`]: ../struct.SendStream.html
/// [`RecvStream`]: ../struct.RecvStream.html
/// [`poll`]: #method.poll
/// [executor]: https://docs.rs/futures/0.1/futures/future/trait.Executor.html
///
/// # Examples
///
/// ```
/// # use tokio::io::{AsyncRead, AsyncWrite};
/// # use miku_h2::client;
/// # use miku_h2::client::*;
/// #
/// # async fn doc<T>(my_io: T) -> Result<(), miku_h2::Error>
/// # where T: AsyncRead + AsyncWrite + Send + Unpin + 'static,
/// # {
/// let (send_request, connection) = client::handshake(my_io).await?;
/// // Submit the connection handle to an executor.
/// tokio::spawn(async { connection.await.expect("connection failed"); });
///
/// // Now, use `send_request` to initialize HTTP/2 streams.
/// // ...
/// # Ok(())
/// # }
/// #
/// # pub fn main() {}
/// ```
#[must_use = "futures do nothing unless polled"]
pub struct Connection<T, B: Buf = Bytes> {
    inner: proto::Connection<T, Peer, B>,
}

/// A future of an HTTP response.
#[derive(Debug)]
#[must_use = "futures do nothing unless polled"]
pub struct ResponseFuture {
    inner: proto::OpaqueStreamRef,
    push_promise_consumed: bool,
}

/// A future of a pushed HTTP response.
///
/// We have to differentiate between pushed and non-pushed streams because of
/// the spec: <https://httpwg.org/specs/rfc7540.html#PUSH_PROMISE>
/// > PUSH_PROMISE frames MUST only be sent on a peer-initiated stream
/// > that is in either the "open" or "half-closed (remote)" state.
#[derive(Debug)]
#[must_use = "futures do nothing unless polled"]
pub struct PushedResponseFuture {
    inner: ResponseFuture,
}

/// A pushed response and corresponding request headers
#[derive(Debug)]
pub struct PushPromise {
    /// The request headers
    request: Request<()>,

    /// The pushed response
    response: PushedResponseFuture,
}

/// A stream of pushed responses and corresponding promised requests
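///
/// # Examples
///
/// A minimal sketch of draining the pushed responses for a stream, assuming a
/// `PushPromises` value has already been obtained for it:
///
/// ```rust
/// # use miku_h2::client::*;
/// # async fn doc(mut pushes: PushPromises) -> Result<(), miku_h2::Error> {
/// while let Some(promise) = pushes.push_promise().await {
///     let (request, response) = promise?.into_parts();
///     println!("promised request: {:?}", request);
///     // Await the pushed response like a regular response.
///     let response = response.await?;
///     println!("pushed response: {:?}", response);
/// }
/// # Ok(())
/// # }
/// ```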
#[derive(Debug)]
#[must_use = "streams do nothing unless polled"]
pub struct PushPromises {
    inner: proto::OpaqueStreamRef,
}

/// Builds client connections with custom configuration values.
///
/// Methods can be chained in order to set the configuration values.
///
/// The client is constructed by calling [`handshake`] and passing the I/O
/// handle that will back the HTTP/2 client.
///
/// New instances of `Builder` are obtained via [`Builder::new`].
///
/// See function level documentation for details on the various client
/// configuration settings.
///
/// [`Builder::new`]: struct.Builder.html#method.new
/// [`handshake`]: struct.Builder.html#method.handshake
///
/// # Examples
///
/// ```
/// # use tokio::io::{AsyncRead, AsyncWrite};
/// # use miku_h2::client::*;
/// # use bytes::Bytes;
/// #
/// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
/// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
/// # {
/// // `client_fut` is a future representing the completion of the HTTP/2
/// // handshake.
/// let client_fut = Builder::new()
///     .initial_window_size(1_000_000)
///     .max_concurrent_streams(1000)
///     .handshake(my_io);
/// # client_fut.await
/// # }
/// #
/// # pub fn main() {}
/// ```
#[derive(Clone, Debug)]
pub struct Builder {
    /// Time to keep locally reset streams around before reaping.
    reset_stream_duration: Duration,

    /// Initial maximum number of locally initiated (send) streams.
    /// After receiving a SETTINGS frame from the remote peer,
    /// the connection will overwrite this value with the
    /// MAX_CONCURRENT_STREAMS specified in the frame.
    /// If no value is advertised by the remote peer in the initial SETTINGS
    /// frame, it will be set to usize::MAX.
    initial_max_send_streams: usize,

    /// Initial target window size for new connections.
    initial_target_connection_window_size: Option<u32>,

    /// Maximum amount of bytes to "buffer" for writing per stream.
    max_send_buffer_size: usize,

    /// Maximum number of locally reset streams to keep at a time.
    reset_stream_max: usize,

    /// Maximum number of remotely reset streams to allow in the pending
    /// accept queue.
    pending_accept_reset_stream_max: usize,

    /// Initial `Settings` frame to send as part of the handshake.
    settings: Settings,

    /// The `Headers` frame pseudo order.
    headers_frame_pseudo_order: Option<&'static [PseudoType; 4]>,

    /// The `Headers` frame priority setting.
    headers_frame_priority: Option<StreamDependency>,

    /// The `Priority` frames (settings) for virtual streams.
    virtual_streams_priorities: Option<&'static [Priority]>,

    /// The stream ID of the first (lowest) stream. Subsequent streams will use
    /// monotonically increasing stream IDs.
    stream_id: StreamId,

    /// Maximum number of locally reset streams due to protocol error across
    /// the lifetime of the connection.
    ///
    /// When this gets exceeded, we issue GOAWAYs.
    local_max_error_reset_streams: Option<usize>,
}

#[derive(Debug)]
pub(crate) struct Peer;

// ===== impl SendRequest =====

impl<B> SendRequest<B>
where
    B: Buf,
{
    /// Returns `Ready` when the connection can initialize a new HTTP/2
    /// stream.
    ///
    /// This function must return `Ready` before `send_request` is called. When
    /// `Poll::Pending` is returned, the task will be notified once the
    /// readiness state changes.
    ///
    /// See [module] level docs for more details.
    ///
    /// [module]: index.html
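    ///
    /// # Examples
    ///
    /// A minimal sketch of awaiting readiness with `std::future::poll_fn`:
    ///
    /// ```rust
    /// # use miku_h2::client::*;
    /// # async fn doc<B: bytes::Buf>(mut send_request: SendRequest<B>) {
    /// // Wait until a new stream may be initialized before sending.
    /// std::future::poll_fn(|cx| send_request.poll_ready(cx)).await.unwrap();
    /// // A request can now be sent with `send_request`.
    /// # }
    /// ```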
    pub fn poll_ready(&mut self, cx: &mut Context) -> Poll<Result<(), crate::Error>> {
        ready!(self.inner.poll_pending_open(cx, self.pending.as_ref()))?;
        self.pending = None;
        Poll::Ready(Ok(()))
    }

    /// Consumes `self`, returning a future that returns `self` back once it is
    /// ready to send a request.
    ///
    /// This function should be called before calling `send_request`.
    ///
    /// This is a functional combinator for [`poll_ready`]. The returned future
    /// will call [`SendRequest::poll_ready`] until `Ready`, then return `self`
    /// to the caller.
    ///
    /// # Examples
    ///
    /// ```rust
    /// # use miku_h2::client::*;
    /// # use http::*;
    /// # async fn doc(send_request: SendRequest<&'static [u8]>)
    /// # {
    /// // First, wait until the `send_request` handle is ready to send a new
    /// // request
    /// let mut send_request = send_request.ready().await.unwrap();
    /// // Use `send_request` here.
    /// # }
    /// # pub fn main() {}
    /// ```
    ///
    /// See [module] level docs for more details.
    ///
    /// [`poll_ready`]: #method.poll_ready
    /// [`SendRequest::poll_ready`]: #method.poll_ready
    /// [module]: index.html
    pub fn ready(self) -> ReadySendRequest<B> {
        ReadySendRequest { inner: Some(self) }
    }

    /// Sends an HTTP/2 request to the server.
    ///
    /// `send_request` initializes a new HTTP/2 stream on the associated
    /// connection, then sends the given request using this new stream. Only the
    /// request head is sent.
    ///
    /// On success, a [`ResponseFuture`] instance and [`SendStream`] instance
    /// are returned. The [`ResponseFuture`] instance is used to get the
    /// server's response and the [`SendStream`] instance is used to send a
    /// request body or trailers to the server over the same HTTP/2 stream.
    ///
    /// To send a request body or trailers, set `end_of_stream` to `false`.
    /// Then, use the returned [`SendStream`] instance to stream request body
    /// chunks or send trailers. If `end_of_stream` is set to `true`, then
    /// attempting to call [`SendStream::send_data`] or
    /// [`SendStream::send_trailers`] will result in an error.
    ///
    /// If no request body or trailers are to be sent, set `end_of_stream` to
    /// `true` and drop the returned [`SendStream`] instance.
    ///
    /// # A note on HTTP versions
    ///
    /// The provided `Request` will be encoded differently depending on the
    /// value of its version field. If the version is set to 2.0, then the
    /// request is encoded as per the specification recommends.
    ///
    /// If the version is set to a lower value, then the request is encoded to
    /// preserve the characteristics of HTTP 1.1 and lower. Specifically, host
    /// headers are permitted and the `:authority` pseudo header is not
    /// included.
    ///
    /// The caller should always set the request's version field to 2.0 unless
    /// specifically transmitting an HTTP 1.1 request over 2.0.
    ///
    /// # Examples
    ///
    /// Sending a request with no body
    ///
    /// ```rust
    /// # use miku_h2::client::*;
    /// # use http::*;
    /// # async fn doc(send_request: SendRequest<&'static [u8]>)
    /// # {
    /// // First, wait until the `send_request` handle is ready to send a new
    /// // request
    /// let mut send_request = send_request.ready().await.unwrap();
    /// // Prepare the HTTP request to send to the server.
    /// let request = Request::get("https://www.example.com/")
    ///     .body(())
    ///     .unwrap();
    ///
    /// // Send the request to the server. Since we are not sending a
    /// // body or trailers, we can drop the `SendStream` instance.
    /// let (response, _) = send_request.send_request(request, true).unwrap();
    /// let response = response.await.unwrap();
    /// // Process the response
    /// # }
    /// # pub fn main() {}
    /// ```
    ///
    /// Sending a request with a body and trailers
    ///
    /// ```rust
    /// # use miku_h2::client::*;
    /// # use http::*;
    /// # async fn doc(send_request: SendRequest<&'static [u8]>)
    /// # {
    /// // First, wait until the `send_request` handle is ready to send a new
    /// // request
    /// let mut send_request = send_request.ready().await.unwrap();
    ///
    /// // Prepare the HTTP request to send to the server.
    /// let request = Request::get("https://www.example.com/")
    ///     .body(())
    ///     .unwrap();
    ///
    /// // Send the request to the server. Since we will be sending a
    /// // body and trailers, we keep the `SendStream` instance.
    /// let (response, mut send_stream) = send_request
    ///     .send_request(request, false).unwrap();
    ///
    /// // At this point, one option would be to wait for send capacity.
    /// // Doing so would allow us to not hold data in memory that
    /// // cannot be sent. However, this is not a requirement, so this
    /// // example will skip that step. See `SendStream` documentation
    /// // for more details.
    /// send_stream.send_data(b"hello", false).unwrap();
    /// send_stream.send_data(b"world", false).unwrap();
    ///
    /// // Send the trailers.
    /// let mut trailers = HeaderMap::new();
    /// trailers.insert(
    ///     header::HeaderName::from_bytes(b"my-trailer").unwrap(),
    ///     header::HeaderValue::from_bytes(b"hello").unwrap());
    ///
    /// send_stream.send_trailers(trailers).unwrap();
    ///
    /// let response = response.await.unwrap();
    /// // Process the response
    /// # }
    /// # pub fn main() {}
    /// ```
    ///
    /// [`ResponseFuture`]: struct.ResponseFuture.html
    /// [`SendStream`]: ../struct.SendStream.html
    /// [`SendStream::send_data`]: ../struct.SendStream.html#method.send_data
    /// [`SendStream::send_trailers`]: ../struct.SendStream.html#method.send_trailers
    pub fn send_request(
        &mut self,
        request: Request<()>,
        end_of_stream: bool,
    ) -> Result<(ResponseFuture, SendStream<B>), crate::Error> {
        self.inner
            .send_request(request, end_of_stream, self.pending.as_ref())
            .map_err(Into::into)
            .map(|(stream, is_full)| {
                if stream.is_pending_open() && is_full {
                    // The request queue is full; hold on to the stream so
                    // `poll_ready` blocks further requests until it opens.
                    self.pending = Some(stream.clone_to_opaque());
                }

                let response = ResponseFuture {
                    inner: stream.clone_to_opaque(),
                    push_promise_consumed: false,
                };

                let stream = SendStream::new(stream);

                (response, stream)
            })
    }

    /// Returns whether the [extended CONNECT protocol][1] is enabled or not.
    ///
    /// This setting is configured by the server peer by sending the
    /// [`SETTINGS_ENABLE_CONNECT_PROTOCOL` parameter][2] in a `SETTINGS` frame.
    /// This method returns the currently acknowledged value received from the
    /// remote.
    ///
    /// [1]: https://datatracker.ietf.org/doc/html/rfc8441#section-4
    /// [2]: https://datatracker.ietf.org/doc/html/rfc8441#section-3
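    ///
    /// # Examples
    ///
    /// A minimal sketch of gating extended CONNECT requests on this setting:
    ///
    /// ```rust
    /// # use miku_h2::client::*;
    /// # fn doc<B: bytes::Buf>(send_request: &SendRequest<B>) {
    /// if send_request.is_extended_connect_protocol_enabled() {
    ///     // The server accepts CONNECT requests that carry the
    ///     // `:protocol` pseudo-header (RFC 8441), e.g. for WebSockets.
    /// }
    /// # }
    /// ```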
    pub fn is_extended_connect_protocol_enabled(&self) -> bool {
        self.inner.is_extended_connect_protocol_enabled()
    }

    /// Returns the current max send streams
    pub fn current_max_send_streams(&self) -> usize {
        self.inner.current_max_send_streams()
    }

    /// Returns the current max recv streams
    pub fn current_max_recv_streams(&self) -> usize {
        self.inner.current_max_recv_streams()
    }
}

impl<B> fmt::Debug for SendRequest<B>
where
    B: Buf,
{
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        fmt.debug_struct("SendRequest").finish()
    }
}

impl<B> Clone for SendRequest<B>
where
    B: Buf,
{
    fn clone(&self) -> Self {
        SendRequest {
            inner: self.inner.clone(),
            pending: None,
        }
    }
}

#[cfg(feature = "unstable")]
impl<B> SendRequest<B>
where
    B: Buf,
{
    /// Returns the number of active streams.
    ///
    /// An active stream is a stream that has not yet transitioned to a closed
    /// state.
    pub fn num_active_streams(&self) -> usize {
        self.inner.num_active_streams()
    }

    /// Returns the number of streams that are held in memory.
    ///
    /// A wired stream is a stream that is either active or is closed but must
    /// stay in memory for some reason. For example, there are still outstanding
    /// userspace handles pointing to the slot.
    pub fn num_wired_streams(&self) -> usize {
        self.inner.num_wired_streams()
    }
}

// ===== impl ReadySendRequest =====

impl<B> Future for ReadySendRequest<B>
where
    B: Buf,
{
    type Output = Result<SendRequest<B>, crate::Error>;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        match &mut self.inner {
            Some(send_request) => {
                ready!(send_request.poll_ready(cx))?;
            }
            None => panic!("called `poll` after future completed"),
        }

        Poll::Ready(Ok(self.inner.take().unwrap()))
    }
}

// ===== impl Builder =====

impl Builder {
    /// Returns a new client builder instance initialized with default
    /// configuration values.
    ///
    /// Configuration methods can be chained on the return value.
    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .initial_window_size(1_000_000)
    ///     .max_concurrent_streams(1000)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```
    pub fn new() -> Builder {
        Builder {
            max_send_buffer_size: proto::DEFAULT_MAX_SEND_BUFFER_SIZE,
            reset_stream_duration: Duration::from_secs(proto::DEFAULT_RESET_STREAM_SECS),
            reset_stream_max: proto::DEFAULT_RESET_STREAM_MAX,
            pending_accept_reset_stream_max: proto::DEFAULT_REMOTE_RESET_STREAM_MAX,
            initial_target_connection_window_size: None,
            initial_max_send_streams: usize::MAX,
            settings: Default::default(),
            headers_frame_pseudo_order: None,
            headers_frame_priority: None,
            virtual_streams_priorities: None,
            stream_id: 1.into(),
            local_max_error_reset_streams: Some(proto::DEFAULT_LOCAL_RESET_COUNT_MAX),
        }
    }

    /// Indicates the initial window size (in octets) for stream-level
    /// flow control for received data.
    ///
    /// The initial window of a stream is used as part of flow control. For more
    /// details, see [`FlowControl`].
    ///
    /// The default value is 65,535.
    ///
    /// [`FlowControl`]: ../struct.FlowControl.html
    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .initial_window_size(1_000_000)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```
    pub fn initial_window_size(&mut self, size: u32) -> &mut Self {
        self.settings.set_initial_window_size(Some(size));
        self
    }

    /// Indicates the initial window size (in octets) for connection-level flow
    /// control for received data.
    ///
    /// The initial window of a connection is used as part of flow control. For
    /// more details, see [`FlowControl`].
    ///
    /// The default value is 65,535.
    ///
    /// [`FlowControl`]: ../struct.FlowControl.html
    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .initial_connection_window_size(1_000_000)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```
    pub fn initial_connection_window_size(&mut self, size: u32) -> &mut Self {
        self.initial_target_connection_window_size = Some(size);
        self
    }

    /// Indicates the size (in octets) of the largest HTTP/2 frame payload that
    /// the configured client is able to accept.
    ///
    /// The sender may send data frames that are **smaller** than this value,
    /// but any data larger than `max` will be broken up into multiple `DATA`
    /// frames.
    ///
    /// The value **must** be between 16,384 and 16,777,215. The default value
    /// is 16,384.
    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .max_frame_size(1_000_000)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```
    ///
    /// # Panics
    ///
    /// This function panics if `max` is not within the legal range specified
    /// above.
    pub fn max_frame_size(&mut self, max: u32) -> &mut Self {
        self.settings.set_max_frame_size(Some(max));
        self
    }

    /// Sets the max size of received header frames.
    ///
    /// This advisory setting informs a peer of the maximum size of header list
    /// that the sender is prepared to accept, in octets. The value is based on
    /// the uncompressed size of header fields, including the length of the name
    /// and value in octets plus an overhead of 32 octets for each header field.
    ///
    /// This setting is also used to limit the maximum amount of data that is
    /// buffered to decode HEADERS frames.
    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .max_header_list_size(16 * 1024)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```
    pub fn max_header_list_size(&mut self, max: u32) -> &mut Self {
        self.settings.set_max_header_list_size(Some(max));
        self
    }

    /// Sets the maximum number of concurrent streams.
    ///
    /// The maximum concurrent streams setting only controls the maximum number
    /// of streams that can be initiated by the remote peer. In other words,
    /// when this setting is set to 100, this does not limit the number of
    /// concurrent streams that can be created by the caller.
    ///
    /// It is recommended that this value be no smaller than 100, so as to not
    /// unnecessarily limit parallelism. However, any value is legal, including
    /// 0. If `max` is set to 0, then the remote will not be permitted to
    /// initiate streams.
    ///
    /// Note that streams in the reserved state, i.e., push promises that have
    /// been reserved but the stream has not started, do not count against this
    /// setting.
    ///
    /// Also note that if the remote *does* exceed the value set here, it is not
    /// a protocol level error. Instead, the `miku_h2` library will immediately
    /// reset the stream.
    ///
    /// See [Section 5.1.2] in the HTTP/2 spec for more details.
    ///
    /// [Section 5.1.2]: https://http2.github.io/http2-spec/#rfc.section.5.1.2
    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .max_concurrent_streams(1000)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```
    pub fn max_concurrent_streams(&mut self, max: u32) -> &mut Self {
        self.settings.set_max_concurrent_streams(Some(max));
        self
    }
873
874 /// Sets the initial maximum of locally initiated (send) streams.
875 ///
876 /// The initial settings will be overwritten by the remote peer when
877 /// the SETTINGS frame is received. The new value will be set to the
878 /// `max_concurrent_streams()` from the frame. If no value is advertised in
879 /// the initial SETTINGS frame from the remote peer as part of
880 /// [HTTP/2 Connection Preface], `usize::MAX` will be set.
881 ///
882 /// This setting prevents the caller from exceeding this number of
883 /// streams that are counted towards the concurrency limit.
884 ///
885 /// Sending streams past the limit returned by the peer will be treated
886 /// as a stream error of type PROTOCOL_ERROR or REFUSED_STREAM.
887 ///
888 /// See [Section 5.1.2] in the HTTP/2 spec for more details.
889 ///
890 /// The default value is `usize::MAX`.
891 ///
892 /// [HTTP/2 Connection Preface]: https://httpwg.org/specs/rfc9113.html#preface
893 /// [Section 5.1.2]: https://httpwg.org/specs/rfc9113.html#rfc.section.5.1.2
894 ///
895 /// # Examples
896 ///
897 /// ```
898 /// # use tokio::io::{AsyncRead, AsyncWrite};
899 /// # use miku_h2::client::*;
900 /// # use bytes::Bytes;
901 /// #
902 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
903 /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
904 /// # {
905 /// // `client_fut` is a future representing the completion of the HTTP/2
906 /// // handshake.
907 /// let client_fut = Builder::new()
908 /// .initial_max_send_streams(1000)
909 /// .handshake(my_io);
910 /// # client_fut.await
911 /// # }
912 /// #
913 /// # pub fn main() {}
914 /// ```
915 pub fn initial_max_send_streams(&mut self, initial: usize) -> &mut Self {
916 self.initial_max_send_streams = initial;
917 self
918 }
919
920 /// Sets the maximum number of concurrent locally reset streams.
921 ///
922 /// When a stream is explicitly reset, the HTTP/2 specification requires
923 /// that any further frames received for that stream must be ignored for
924 /// "some time".
925 ///
926 /// In order to satisfy the specification, internal state must be maintained
927 /// to implement the behavior. This state grows linearly with the number of
928 /// streams that are locally reset.
929 ///
    /// The `max_concurrent_reset_streams` setting sets an upper
931 /// bound on the amount of state that is maintained. When this max value is
932 /// reached, the oldest reset stream is purged from memory.
933 ///
934 /// Once the stream has been fully purged from memory, any additional frames
935 /// received for that stream will result in a connection level protocol
936 /// error, forcing the connection to terminate.
937 ///
938 /// The default value is 10.
939 ///
940 /// # Examples
941 ///
942 /// ```
943 /// # use tokio::io::{AsyncRead, AsyncWrite};
944 /// # use miku_h2::client::*;
945 /// # use bytes::Bytes;
946 /// #
947 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
948 /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
949 /// # {
950 /// // `client_fut` is a future representing the completion of the HTTP/2
951 /// // handshake.
952 /// let client_fut = Builder::new()
953 /// .max_concurrent_reset_streams(1000)
954 /// .handshake(my_io);
955 /// # client_fut.await
956 /// # }
957 /// #
958 /// # pub fn main() {}
959 /// ```
960 pub fn max_concurrent_reset_streams(&mut self, max: usize) -> &mut Self {
961 self.reset_stream_max = max;
962 self
963 }
964
965 /// Sets the duration to remember locally reset streams.
966 ///
967 /// When a stream is explicitly reset, the HTTP/2 specification requires
968 /// that any further frames received for that stream must be ignored for
969 /// "some time".
970 ///
971 /// In order to satisfy the specification, internal state must be maintained
972 /// to implement the behavior. This state grows linearly with the number of
973 /// streams that are locally reset.
974 ///
975 /// The `reset_stream_duration` setting configures the max amount of time
976 /// this state will be maintained in memory. Once the duration elapses, the
977 /// stream state is purged from memory.
978 ///
979 /// Once the stream has been fully purged from memory, any additional frames
980 /// received for that stream will result in a connection level protocol
981 /// error, forcing the connection to terminate.
982 ///
983 /// The default value is 30 seconds.
984 ///
985 /// # Examples
986 ///
987 /// ```
988 /// # use tokio::io::{AsyncRead, AsyncWrite};
989 /// # use miku_h2::client::*;
990 /// # use std::time::Duration;
991 /// # use bytes::Bytes;
992 /// #
993 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
994 /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
995 /// # {
996 /// // `client_fut` is a future representing the completion of the HTTP/2
997 /// // handshake.
998 /// let client_fut = Builder::new()
999 /// .reset_stream_duration(Duration::from_secs(10))
1000 /// .handshake(my_io);
1001 /// # client_fut.await
1002 /// # }
1003 /// #
1004 /// # pub fn main() {}
1005 /// ```
1006 pub fn reset_stream_duration(&mut self, dur: Duration) -> &mut Self {
1007 self.reset_stream_duration = dur;
1008 self
1009 }
1010
1011 /// Sets the maximum number of local resets due to protocol errors made by the remote end.
1012 ///
1013 /// Invalid frames and many other protocol errors will lead to resets being generated for those streams.
1014 /// Too many of these often indicate a malicious client, and there are attacks which can abuse this to DOS servers.
1015 /// This limit protects against these DOS attacks by limiting the amount of resets we can be forced to generate.
1016 ///
1017 /// When the number of local resets exceeds this threshold, the client will close the connection.
1018 ///
1019 /// If you really want to disable this, supply [`Option::None`] here.
1020 /// Disabling this is not recommended and may expose you to DOS attacks.
1021 ///
1022 /// The default value is currently 1024, but could change.
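    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .max_local_error_reset_streams(Some(1024))
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```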
1023 pub fn max_local_error_reset_streams(&mut self, max: Option<usize>) -> &mut Self {
1024 self.local_max_error_reset_streams = max;
1025 self
1026 }
1027
1028 /// Sets the maximum number of pending-accept remotely-reset streams.
1029 ///
1030 /// Streams that have been received by the peer, but not accepted by the
1031 /// user, can also receive a RST_STREAM. This is a legitimate pattern: one
1032 /// could send a request and then shortly after, realize it is not needed,
1033 /// sending a CANCEL.
1034 ///
1035 /// However, since those streams are now "closed", they don't count towards
1036 /// the max concurrent streams. So, they will sit in the accept queue,
1037 /// using memory.
1038 ///
1039 /// When the number of remotely-reset streams sitting in the pending-accept
1040 /// queue reaches this maximum value, a connection error with the code of
1041 /// `ENHANCE_YOUR_CALM` will be sent to the peer, and returned by the
1042 /// `Future`.
1043 ///
1044 /// The default value is currently 20, but could change.
1045 ///
1046 /// # Examples
1047 ///
1048 /// ```
1049 /// # use tokio::io::{AsyncRead, AsyncWrite};
1050 /// # use miku_h2::client::*;
1051 /// # use bytes::Bytes;
1052 /// #
1053 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
1054 /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
1055 /// # {
1056 /// // `client_fut` is a future representing the completion of the HTTP/2
1057 /// // handshake.
1058 /// let client_fut = Builder::new()
1059 /// .max_pending_accept_reset_streams(100)
1060 /// .handshake(my_io);
1061 /// # client_fut.await
1062 /// # }
1063 /// #
1064 /// # pub fn main() {}
1065 /// ```
1066 pub fn max_pending_accept_reset_streams(&mut self, max: usize) -> &mut Self {
1067 self.pending_accept_reset_stream_max = max;
1068 self
1069 }
1070
1071 /// Sets the maximum send buffer size per stream.
1072 ///
1073 /// Once a stream has buffered up to (or over) the maximum, the stream's
1074 /// flow control will not "poll" additional capacity. Once bytes for the
1075 /// stream have been written to the connection, the send buffer capacity
1076 /// will be freed up again.
1077 ///
1078 /// The default is currently ~400KB, but may change.
1079 ///
1080 /// # Panics
1081 ///
1082 /// This function panics if `max` is larger than `u32::MAX`.
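    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// # use bytes::Bytes;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
    /// # {
    /// // `client_fut` is a future representing the completion of the HTTP/2
    /// // handshake.
    /// let client_fut = Builder::new()
    ///     .max_send_buffer_size(1024 * 1024)
    ///     .handshake(my_io);
    /// # client_fut.await
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```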
1083 pub fn max_send_buffer_size(&mut self, max: usize) -> &mut Self {
1084 assert!(max <= u32::MAX as usize);
1085 self.max_send_buffer_size = max;
1086 self
1087 }
1088
1089 /// Enables or disables server push promises.
1090 ///
1091 /// This value is included in the initial SETTINGS handshake.
    /// Setting this value to `false` in the initial SETTINGS handshake
    /// guarantees that the remote server will never send a push promise.
1095 ///
1096 /// This setting can be changed during the life of a single HTTP/2
1097 /// connection by sending another settings frame updating the value.
1098 ///
1099 /// Default value: `true`.
1100 ///
1101 /// # Examples
1102 ///
1103 /// ```
1104 /// # use tokio::io::{AsyncRead, AsyncWrite};
1105 /// # use miku_h2::client::*;
1106 /// # use std::time::Duration;
1107 /// # use bytes::Bytes;
1108 /// #
1109 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
1110 /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
1111 /// # {
1112 /// // `client_fut` is a future representing the completion of the HTTP/2
1113 /// // handshake.
1114 /// let client_fut = Builder::new()
1115 /// .enable_push(false)
1116 /// .handshake(my_io);
1117 /// # client_fut.await
1118 /// # }
1119 /// #
1120 /// # pub fn main() {}
1121 /// ```
1122 pub fn enable_push(&mut self, enabled: bool) -> &mut Self {
1123 self.settings.set_enable_push(enabled);
1124 self
1125 }
1126
1127 /// Sets the header table size.
1128 ///
1129 /// This setting informs the peer of the maximum size of the header compression
1130 /// table used to encode header blocks, in octets. The encoder may select any value
1131 /// equal to or less than the header table size specified by the sender.
1132 ///
1133 /// The default value is 4,096.
1134 ///
1135 /// # Examples
1136 ///
1137 /// ```
1138 /// # use tokio::io::{AsyncRead, AsyncWrite};
1139 /// # use miku_h2::client::*;
1140 /// # use bytes::Bytes;
1141 /// #
1142 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
1143 /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
1144 /// # {
1145 /// // `client_fut` is a future representing the completion of the HTTP/2
1146 /// // handshake.
1147 /// let client_fut = Builder::new()
1148 /// .header_table_size(1_000_000)
1149 /// .handshake(my_io);
1150 /// # client_fut.await
1151 /// # }
1152 /// #
1153 /// # pub fn main() {}
1154 /// ```
1155 pub fn header_table_size(&mut self, size: u32) -> &mut Self {
1156 self.settings.set_header_table_size(Some(size));
1157 self
1158 }
1159
1160 /// Sets the `Headers` frame pseudo order.
1161 ///
    /// This is mostly used when impersonating an HTTP/2 fingerprint.
    ///
    /// The default value is `None`, letting `h2` decide.
1165 pub fn headers_frame_pseudo_order(
1166 &mut self,
1167 order: Option<&'static [PseudoType; 4]>,
1168 ) -> &mut Self {
1169 self.headers_frame_pseudo_order = order;
1170 self
1171 }
1172
1173 /// Sets the `Headers` frame priority.
1174 ///
    /// This is mostly used when impersonating an HTTP/2 fingerprint.
    ///
    /// The default value is `None`, letting `h2` decide.
1178 pub fn headers_frame_priority(&mut self, priority: Option<StreamDependency>) -> &mut Self {
1179 self.headers_frame_priority = priority;
1180 self
1181 }
1182
1183 /// Sets the `Priority` frames (settings) for virtual streams.
1184 ///
    /// This is mostly used when impersonating an HTTP/2 fingerprint, and pairs
    /// with [`initial_stream_id`](Self::initial_stream_id).
    ///
    /// The default value is `None`.
1188 pub fn virtual_streams_priorities(
1189 &mut self,
1190 priorities: Option<&'static [Priority]>,
1191 ) -> &mut Self {
1192 self.virtual_streams_priorities = priorities;
1193 self
1194 }
1195
    /// Sets the first stream ID to something other than 1.
    ///
    /// # Panics
    ///
    /// This function panics if `stream_id` is not a client-initiated
    /// (odd-numbered) stream ID.
1197 #[cfg(feature = "unstable")]
1198 pub fn initial_stream_id(&mut self, stream_id: u32) -> &mut Self {
1199 self.stream_id = stream_id.into();
1200 assert!(
1201 self.stream_id.is_client_initiated(),
1202 "stream id must be odd"
1203 );
1204 self
1205 }
1206
1207 /// Creates a new configured HTTP/2 client backed by `io`.
1208 ///
1209 /// It is expected that `io` already be in an appropriate state to commence
1210 /// the [HTTP/2 handshake]. The handshake is completed once both the connection
    /// preface and the initial settings frame are sent by the client.
1212 ///
1213 /// The handshake future does not wait for the initial settings frame from the
1214 /// server.
1215 ///
1216 /// Returns a future which resolves to the [`Connection`] / [`SendRequest`]
1217 /// tuple once the HTTP/2 handshake has been completed.
1218 ///
1219 /// This function also allows the caller to configure the send payload data
1220 /// type. See [Outbound data type] for more details.
1221 ///
1222 /// [HTTP/2 handshake]: http://httpwg.org/specs/rfc7540.html#ConnectionHeader
1223 /// [`Connection`]: struct.Connection.html
1224 /// [`SendRequest`]: struct.SendRequest.html
    /// [Outbound data type]: ../index.html#outbound-data-type
1226 ///
1227 /// # Examples
1228 ///
1229 /// Basic usage:
1230 ///
1231 /// ```
1232 /// # use tokio::io::{AsyncRead, AsyncWrite};
1233 /// # use miku_h2::client::*;
1234 /// # use bytes::Bytes;
1235 /// #
1236 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<((SendRequest<Bytes>, Connection<T, Bytes>)), miku_h2::Error>
1238 /// # {
1239 /// // `client_fut` is a future representing the completion of the HTTP/2
1240 /// // handshake.
1241 /// let client_fut = Builder::new()
1242 /// .handshake(my_io);
1243 /// # client_fut.await
1244 /// # }
1245 /// #
1246 /// # pub fn main() {}
1247 /// ```
1248 ///
1249 /// Configures the send-payload data type. In this case, the outbound data
1250 /// type will be `&'static [u8]`.
1251 ///
1252 /// ```
1253 /// # use tokio::io::{AsyncRead, AsyncWrite};
1254 /// # use miku_h2::client::*;
1255 /// #
1256 /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
1257 /// # -> Result<((SendRequest<&'static [u8]>, Connection<T, &'static [u8]>)), miku_h2::Error>
1258 /// # {
1259 /// // `client_fut` is a future representing the completion of the HTTP/2
1260 /// // handshake.
1261 /// let client_fut = Builder::new()
1262 /// .handshake::<_, &'static [u8]>(my_io);
1263 /// # client_fut.await
1264 /// # }
1265 /// #
1266 /// # pub fn main() {}
1267 /// ```
1268 pub fn handshake<T, B>(
1269 &self,
1270 io: T,
1271 ) -> impl Future<Output = Result<(SendRequest<B>, Connection<T, B>), crate::Error>>
1272 where
1273 T: AsyncRead + AsyncWrite + Unpin,
1274 B: Buf,
1275 {
1276 Connection::handshake2(io, self.clone())
1277 }
1278}
1279
1280impl Default for Builder {
1281 fn default() -> Builder {
1282 Builder::new()
1283 }
1284}
1285
1286/// Creates a new configured HTTP/2 client with default configuration
1287/// values backed by `io`.
1288///
1289/// It is expected that `io` already be in an appropriate state to commence
1290/// the [HTTP/2 handshake]. See [Handshake] for more details.
1291///
1292/// Returns a future which resolves to the [`Connection`] / [`SendRequest`]
1293/// tuple once the HTTP/2 handshake has been completed. The returned
1294/// [`Connection`] instance will be using default configuration values. Use
1295/// [`Builder`] to customize the configuration values used by a [`Connection`]
1296/// instance.
1297///
1298/// [HTTP/2 handshake]: http://httpwg.org/specs/rfc7540.html#ConnectionHeader
1299/// [Handshake]: ../index.html#handshake
1300/// [`Connection`]: struct.Connection.html
1301/// [`SendRequest`]: struct.SendRequest.html
1302///
1303/// # Examples
1304///
1305/// ```
1306/// # use tokio::io::{AsyncRead, AsyncWrite};
1307/// # use miku_h2::client;
1308/// # use miku_h2::client::*;
1309/// #
1310/// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T) -> Result<(), miku_h2::Error>
1311/// # {
1312/// let (send_request, connection) = client::handshake(my_io).await?;
1313/// // The HTTP/2 handshake has completed, now start polling
1314/// // `connection` and use `send_request` to send requests to the
1315/// // server.
1316/// # Ok(())
1317/// # }
1318/// #
1319/// # pub fn main() {}
1320/// ```
1321pub async fn handshake<T>(io: T) -> Result<(SendRequest<Bytes>, Connection<T, Bytes>), crate::Error>
1322where
1323 T: AsyncRead + AsyncWrite + Unpin,
1324{
1325 let builder = Builder::new();
1326 builder
1327 .handshake(io)
1328 .instrument(tracing::trace_span!("client_handshake"))
1329 .await
1330}
1331
1332// ===== impl Connection =====
1333
1334async fn bind_connection<T>(io: &mut T) -> Result<(), crate::Error>
1335where
1336 T: AsyncRead + AsyncWrite + Unpin,
1337{
1338 tracing::debug!("binding client connection");
1339
1340 let msg: &'static [u8] = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";
1341 io.write_all(msg).await.map_err(crate::Error::from_io)?;
1342
1343 tracing::debug!("client connection bound");
1344
1345 Ok(())
1346}
1347
1348impl<T, B> Connection<T, B>
1349where
1350 T: AsyncRead + AsyncWrite + Unpin,
1351 B: Buf,
1352{
1353 async fn handshake2(
1354 mut io: T,
1355 builder: Builder,
1356 ) -> Result<(SendRequest<B>, Connection<T, B>), crate::Error> {
1357 bind_connection(&mut io).await?;
1358
1359 // Create the codec
1360 let mut codec = Codec::new(io);
1361
1362 if let Some(max) = builder.settings.max_frame_size() {
1363 codec.set_max_recv_frame_size(max as usize);
1364 }
1365
1366 if let Some(max) = builder.settings.max_header_list_size() {
1367 codec.set_max_recv_header_list_size(max as usize);
1368 }
1369
1370 // Send initial settings frame
1371 codec
1372 .buffer(builder.settings.clone().into())
1373 .expect("invalid SETTINGS frame");
1374
1375 let inner = proto::Connection::new(
1376 codec,
1377 proto::Config {
1378 next_stream_id: builder.stream_id,
1379 initial_max_send_streams: builder.initial_max_send_streams,
1380 max_send_buffer_size: builder.max_send_buffer_size,
1381 reset_stream_duration: builder.reset_stream_duration,
1382 reset_stream_max: builder.reset_stream_max,
1383 remote_reset_stream_max: builder.pending_accept_reset_stream_max,
1384 local_error_reset_streams_max: builder.local_max_error_reset_streams,
1385 settings: builder.settings.clone(),
1386 headers_frame_pseudo_order: builder.headers_frame_pseudo_order,
1387 headers_frame_priority: builder.headers_frame_priority,
1388 virtual_streams_priorities: builder.virtual_streams_priorities,
1389 },
1390 );
1391 let send_request = SendRequest {
1392 inner: inner.streams().clone(),
1393 pending: None,
1394 };
1395
1396 let mut connection = Connection { inner };
1397 if let Some(sz) = builder.initial_target_connection_window_size {
1398 connection.set_target_window_size(sz);
1399 }
1400
1401 Ok((send_request, connection))
1402 }
1403
1404 /// Sets the target window size for the whole connection.
1405 ///
1406 /// If `size` is greater than the current value, then a `WINDOW_UPDATE`
1407 /// frame will be immediately sent to the remote, increasing the connection
1408 /// level window by `size - current_value`.
1409 ///
1410 /// If `size` is less than the current value, nothing will happen
1411 /// immediately. However, as window capacity is released by
1412 /// [`FlowControl`] instances, no `WINDOW_UPDATE` frames will be sent
1413 /// out until the number of "in flight" bytes drops below `size`.
1414 ///
1415 /// The default value is 65,535.
1416 ///
1417 /// See [`FlowControl`] documentation for more details.
1418 ///
1419 /// [`FlowControl`]: ../struct.FlowControl.html
1420 /// [library level]: ../index.html#flow-control
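    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(), miku_h2::Error>
    /// # {
    /// let (_send_request, mut connection) = handshake(my_io).await?;
    /// // Grow the connection-level receive window to 1 MiB; a
    /// // `WINDOW_UPDATE` frame is sent immediately.
    /// connection.set_target_window_size(1024 * 1024);
    /// # Ok(())
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```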
1421 pub fn set_target_window_size(&mut self, size: u32) {
1422 assert!(size <= proto::MAX_WINDOW_SIZE);
1423 self.inner.set_target_window_size(size);
1424 }
1425
1426 /// Set a new `INITIAL_WINDOW_SIZE` setting (in octets) for stream-level
1427 /// flow control for received data.
1428 ///
1429 /// The `SETTINGS` will be sent to the remote, and only applied once the
1430 /// remote acknowledges the change.
1431 ///
1432 /// This can be used to increase or decrease the window size for existing
1433 /// streams.
1434 ///
1435 /// # Errors
1436 ///
1437 /// Returns an error if a previous call is still pending acknowledgement
1438 /// from the remote endpoint.
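    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(), miku_h2::Error>
    /// # {
    /// let (_send_request, mut connection) = handshake(my_io).await?;
    /// // Advertise a 1 MiB stream-level window; the change takes effect
    /// // once the remote acknowledges the SETTINGS frame.
    /// connection.set_initial_window_size(1024 * 1024)?;
    /// # Ok(())
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```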
1439 pub fn set_initial_window_size(&mut self, size: u32) -> Result<(), crate::Error> {
1440 assert!(size <= proto::MAX_WINDOW_SIZE);
1441 self.inner.set_initial_window_size(size)?;
1442 Ok(())
1443 }
1444
1445 /// Takes a `PingPong` instance from the connection.
1446 ///
1447 /// # Note
1448 ///
1449 /// This may only be called once. Calling multiple times will return `None`.
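    ///
    /// # Examples
    ///
    /// ```
    /// # use tokio::io::{AsyncRead, AsyncWrite};
    /// # use miku_h2::client::*;
    /// #
    /// # async fn doc<T: AsyncRead + AsyncWrite + Unpin>(my_io: T)
    /// # -> Result<(), miku_h2::Error>
    /// # {
    /// let (_send_request, mut connection) = handshake(my_io).await?;
    /// // The first call returns the `PingPong` handle...
    /// let _ping_pong = connection.ping_pong().unwrap();
    /// // ...and any subsequent call returns `None`.
    /// assert!(connection.ping_pong().is_none());
    /// # Ok(())
    /// # }
    /// #
    /// # pub fn main() {}
    /// ```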
1450 pub fn ping_pong(&mut self) -> Option<PingPong> {
1451 self.inner.take_user_pings().map(PingPong::new)
1452 }
1453
1454 /// Returns the maximum number of concurrent streams that may be initiated
1455 /// by this client.
1456 ///
1457 /// This limit is configured by the server peer by sending the
1458 /// [`SETTINGS_MAX_CONCURRENT_STREAMS` parameter][1] in a `SETTINGS` frame.
1459 /// This method returns the currently acknowledged value received from the
1460 /// remote.
1461 ///
1462 /// [1]: https://tools.ietf.org/html/rfc7540#section-5.1.2
1463 pub fn max_concurrent_send_streams(&self) -> usize {
1464 self.inner.max_send_streams()
    }

1466 /// Returns the maximum number of concurrent streams that may be initiated
1467 /// by the server on this connection.
1468 ///
1469 /// This returns the value of the [`SETTINGS_MAX_CONCURRENT_STREAMS`
1470 /// parameter][1] sent in a `SETTINGS` frame that has been
1471 /// acknowledged by the remote peer. The value to be sent is configured by
1472 /// the [`Builder::max_concurrent_streams`][2] method before handshaking
1473 /// with the remote peer.
1474 ///
1475 /// [1]: https://tools.ietf.org/html/rfc7540#section-5.1.2
1476 /// [2]: ../struct.Builder.html#method.max_concurrent_streams
1477 pub fn max_concurrent_recv_streams(&self) -> usize {
1478 self.inner.max_recv_streams()
1479 }
1480}
1481
1482impl<T, B> Future for Connection<T, B>
1483where
1484 T: AsyncRead + AsyncWrite + Unpin,
1485 B: Buf,
1486{
1487 type Output = Result<(), crate::Error>;
1488
1489 fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
1490 self.inner.maybe_close_connection_if_no_streams();
1491 let result = self.inner.poll(cx).map_err(Into::into);
1492 if result.is_pending() && !self.inner.has_streams_or_other_references() {
1493 tracing::trace!("last stream closed during poll, wake again");
1494 cx.waker().wake_by_ref();
1495 }
1496 result
1497 }
1498}
1499
1500impl<T, B> fmt::Debug for Connection<T, B>
1501where
1502 T: AsyncRead + AsyncWrite,
1503 T: fmt::Debug,
1504 B: fmt::Debug + Buf,
1505{
1506 fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
1507 fmt::Debug::fmt(&self.inner, fmt)
1508 }
1509}
1510
1511// ===== impl ResponseFuture =====
1512
1513impl Future for ResponseFuture {
1514 type Output = Result<Response<RecvStream>, crate::Error>;
1515
1516 fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
1517 let (parts, _) = ready!(self.inner.poll_response(cx))?.into_parts();
1518 let body = RecvStream::new(FlowControl::new(self.inner.clone()));
1519
1520 Poll::Ready(Ok(Response::from_parts(parts, body)))
1521 }
1522}
1523
1524impl ResponseFuture {
1525 /// Returns the stream ID of the response stream.
1526 ///
1527 /// # Panics
1528 ///
1529 /// If the lock on the stream store has been poisoned.
1530 pub fn stream_id(&self) -> crate::StreamId {
1531 crate::StreamId::from_internal(self.inner.stream_id())
    }

    /// Returns a stream of PushPromises.
1534 ///
1535 /// # Panics
1536 ///
    /// If this method has been called before,
    /// or if the stream itself was pushed.
1539 pub fn push_promises(&mut self) -> PushPromises {
1540 if self.push_promise_consumed {
1541 panic!("Reference to push promises stream taken!");
1542 }
1543 self.push_promise_consumed = true;
1544 PushPromises {
1545 inner: self.inner.clone(),
1546 }
1547 }
1548}
1549
1550// ===== impl PushPromises =====
1551
1552impl PushPromises {
1553 /// Get the next `PushPromise`.
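    ///
    /// # Examples
    ///
    /// ```
    /// # use miku_h2::client::PushPromises;
    /// # async fn doc(mut push_promises: PushPromises) {
    /// // Drain the pushed streams as they arrive.
    /// while let Some(result) = push_promises.push_promise().await {
    ///     match result {
    ///         Ok(push_promise) => {
    ///             // Inspect the pushed request's headers.
    ///             let _request = push_promise.request();
    ///         }
    ///         Err(_e) => break,
    ///     }
    /// }
    /// # }
    /// ```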
1554 pub async fn push_promise(&mut self) -> Option<Result<PushPromise, crate::Error>> {
1555 crate::poll_fn(move |cx| self.poll_push_promise(cx)).await
1556 }
1557
1558 #[doc(hidden)]
1559 pub fn poll_push_promise(
1560 &mut self,
1561 cx: &mut Context<'_>,
1562 ) -> Poll<Option<Result<PushPromise, crate::Error>>> {
1563 match self.inner.poll_pushed(cx) {
1564 Poll::Ready(Some(Ok((request, response)))) => {
1565 let response = PushedResponseFuture {
1566 inner: ResponseFuture {
1567 inner: response,
1568 push_promise_consumed: false,
1569 },
1570 };
1571 Poll::Ready(Some(Ok(PushPromise { request, response })))
1572 }
1573 Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(e.into()))),
1574 Poll::Ready(None) => Poll::Ready(None),
1575 Poll::Pending => Poll::Pending,
1576 }
1577 }
1578}
1579
1580#[cfg(feature = "stream")]
1581impl futures_core::Stream for PushPromises {
1582 type Item = Result<PushPromise, crate::Error>;
1583
1584 fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
1585 self.poll_push_promise(cx)
1586 }
1587}
1588
1589// ===== impl PushPromise =====
1590
1591impl PushPromise {
1592 /// Returns a reference to the push promise's request headers.
1593 pub fn request(&self) -> &Request<()> {
1594 &self.request
1595 }
1596
1597 /// Returns a mutable reference to the push promise's request headers.
1598 pub fn request_mut(&mut self) -> &mut Request<()> {
1599 &mut self.request
1600 }
1601
1602 /// Consumes `self`, returning the push promise's request headers and
1603 /// response future.
1604 pub fn into_parts(self) -> (Request<()>, PushedResponseFuture) {
1605 (self.request, self.response)
1606 }
1607}
1608
1609// ===== impl PushedResponseFuture =====
1610
1611impl Future for PushedResponseFuture {
1612 type Output = Result<Response<RecvStream>, crate::Error>;
1613
1614 fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
1615 Pin::new(&mut self.inner).poll(cx)
1616 }
1617}
1618
1619impl PushedResponseFuture {
1620 /// Returns the stream ID of the response stream.
1621 ///
1622 /// # Panics
1623 ///
1624 /// If the lock on the stream store has been poisoned.
1625 pub fn stream_id(&self) -> crate::StreamId {
1626 self.inner.stream_id()
1627 }
1628}
1629
1630// ===== impl Peer =====
1631
1632impl Peer {
1633 pub fn convert_send_message(
1634 id: StreamId,
1635 request: Request<()>,
1636 protocol: Option<Protocol>,
1637 headers_frame_pseudo_order: Option<&'static [PseudoType; 4]>,
1638 headers_frame_priority: Option<StreamDependency>,
1639 end_of_stream: bool,
1640 ) -> Result<Headers, SendError> {
1641 use http::request::Parts;
1642
1643 let (
1644 Parts {
1645 method,
1646 uri,
1647 headers,
1648 version,
1649 ..
1650 },
1651 _,
1652 ) = request.into_parts();
1653
1654 let is_connect = method == Method::CONNECT;
1655
1656 // Build the set pseudo header set. All requests will include `method`
1657 // and `path`.
1658 let mut pseudo = Pseudo::request(method, uri, protocol);
1659
1660 pseudo.set_order(headers_frame_pseudo_order);
1661
1662 if pseudo.scheme.is_none() {
        // If the scheme is not set, then there are two options.
1664 //
1665 // 1) Authority is not set. In this case, a request was issued with
1666 // a relative URI. This is permitted **only** when forwarding
1667 // HTTP 1.x requests. If the HTTP version is set to 2.0, then
1668 // this is an error.
1669 //
1670 // 2) Authority is set, then the HTTP method *must* be CONNECT.
1671 //
1672 // It is not possible to have a scheme but not an authority set (the
1673 // `http` crate does not allow it).
1674 //
1675 if pseudo.authority.is_none() {
1676 if version == Version::HTTP_2 {
1677 return Err(UserError::MissingUriSchemeAndAuthority.into());
1678 } else {
1679 // This is acceptable as per the above comment. However,
1680 // HTTP/2 requires that a scheme is set. Since we are
1681 // forwarding an HTTP 1.1 request, the scheme is set to
1682 // "http".
1683 pseudo.set_scheme(uri::Scheme::HTTP);
1684 }
1685 } else if !is_connect {
1686 // TODO: Error
1687 }
1688 }
1689
1690 // Create the HEADERS frame
1691 let mut frame = Headers::new(id, headers_frame_priority, pseudo, headers);
1692
1693 if end_of_stream {
1694 frame.set_end_stream()
1695 }
1696
1697 Ok(frame)
1698 }
1699}
1700
1701impl proto::Peer for Peer {
1702 type Poll = Response<()>;
1703
1704 const NAME: &'static str = "Client";
1705
1706 fn r#dyn() -> proto::DynPeer {
1707 proto::DynPeer::Client
1708 }
1709
1710 /*
1711 fn is_server() -> bool {
1712 false
1713 }
1714 */
1715
1716 fn convert_poll_message(
1717 pseudo: Pseudo,
1718 fields: HeaderMap,
1719 stream_id: StreamId,
1720 ) -> Result<Self::Poll, Error> {
1721 let mut b = Response::builder();
1722
1723 b = b.version(Version::HTTP_2);
1724
1725 if let Some(status) = pseudo.status {
1726 b = b.status(status);
1727 }
1728
1729 let mut response = match b.body(()) {
1730 Ok(response) => response,
1731 Err(_) => {
1732 // TODO: Should there be more specialized handling for different
1733 // kinds of errors
1734 return Err(Error::library_reset(stream_id, Reason::PROTOCOL_ERROR));
1735 }
1736 };
1737
1738 *response.headers_mut() = fields;
1739
1740 Ok(response)
1741 }
1742}