fastly 0.12.0

//! Support for shielding for Compute services
//!
//! This module allows authors of Compute services to accomplish "origin offload",
//! that is, to ensure that origin requests come from a more limited set of Fastly
//! nodes. Because this is Compute, the implementation provides a bit more flexibility
//! than the corresponding VCL alternatives, but with that flexibility comes the
//! need for a bit more care.
//!
//! In Compute, a shield is "just another [`Backend`]": the Compute service makes HTTP
//! requests to the shield, and receives responses. The service can cache, combine, or
//! rewrite those responses as usual.
//!
//! This means, for example, that a Compute program whose data lives with two
//! different cloud storage providers can offload traffic to two different shields,
//! one per provider, and then combine the data dynamically at a POP close to the user.
//!
//! A cost of this flexibility is that some care needs to be taken when considering
//! caching, especially for data that you may wish to purge.
//!
//! For example, the following is a likely common pattern for writing systems that
//! use the `Shield` API:
//!
//! ```no_run
//! use fastly::{Error, Request, Response};
//! use fastly::shielding::Shield;
//! use fastly::http::StatusCode;
//!
//! #[fastly::main]
//! pub fn main(req: Request) -> Result<Response, Error> {
//!   let shield = match Shield::new("<YOUR SHIELD POP>") {
//!     Ok(v) => v,
//!     Err(e) => return Ok(Response::from_status(StatusCode::INTERNAL_SERVER_ERROR)
//!         .with_body(format!("Could not find shield '<YOUR SHIELD POP>': {:?}", e))),
//!   };
//!
//!   if !shield.running_on() {
//!     // If we're not running on the shield POP, look up the encrypted
//!     // tunnel to the shield, and try to forward the request on.
//!     let response = match shield.encrypted_backend() {
//!       Err(e) => Ok(
//!           Response::from_status(StatusCode::INTERNAL_SERVER_ERROR).with_body(format!(
//!               "Could not convert shield <YOUR SHIELD POP> into an encrypted backend: {:?}",
//!               e
//!           )),
//!       ),
//!     
//!       Ok(backend) => {
//!         // Think very hard about caching right here.
//!         match req.send(backend) {
//!           Err(e) => Ok(Response::from_status(StatusCode::INTERNAL_SERVER_ERROR)
//!              .with_body(format!("Could not send request to shield: {}", e))),
//!           
//!           Ok(resp) => {
//!             // If you want to add a header somewhere to mark that you got this
//!             // from a shield, that can be handy for debugging, and this would
//!             // be the place to do it.
//!             Ok(resp)
//!           }
//!         }
//!       }
//!     };
//!     
//!     return response;
//!   }
//!
//!   // if we get here, then we're running on the shield!
//!   //
//!   // <complicated stuff>
//!   //
//!   // This value just here to make the example compile
//!   Ok(Response::from_status(StatusCode::OK))
//! }
//! ```
//!
//! In some cases, the right thing to do where the comment says "Think very hard
//! about caching right here" is nothing at all. For example, if you never plan to
//! purge the data returned by the shield node, doing nothing is a great option:
//! your data will be cached twice, once at the client's local POP, and again at
//! the shield POP.
//!
//! The difficulty arises if you may want to purge in the future. In that case,
//! think carefully about the cache keys generated between (a) the edge host and
//! the shield host, and (b) the shield host and the origin. Often those cache keys
//! are identical, and a single purge affects the edge and shield nodes equally.
//! However, several mechanisms (custom cache keys, request rewriting, and so on)
//! can cause the keys for (a) and (b) to differ. In those cases, consider surrogate
//! keys (to connect the two requests under the same key), or other mechanisms, such
//! as [`crate::Request::set_before_send`], that adjust your requests so that the
//! cache keys match.
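//!
//! As one illustration, a response can be tagged with a surrogate key as it passes
//! through the shield, so that a single purge of that key invalidates every cached
//! copy, whether or not the edge and shield cache keys line up. This is only a
//! sketch; the surrogate key and the origin backend name ("origin") are
//! placeholders for your own values:
//!
//! ```no_run
//! use fastly::{Error, Request, Response};
//!
//! fn fetch_through_shield(req: Request) -> Result<Response, Error> {
//!     // Running on the shield: fetch from the real origin, then tag the
//!     // response. Purging the surrogate key "user-profiles" later removes
//!     // this object from every Fastly cache that holds it.
//!     let resp = req.send("origin")?;
//!     Ok(resp.with_header("Surrogate-Key", "user-profiles"))
//! }
//! ```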
//!
//! You may also be interested in looking at [our general discussion on shielding and
//! purging](https://www.fastly.com/documentation/guides/concepts/edge-state/cache/purging/#shielding).
//!
//! Regardless, it is worth thinking carefully about various caching trade-offs
//! that are available to you via this API, particularly if purging is an important
//! feature for you.
use std::time::Duration;

use crate::Backend;
use fastly_shared::FastlyStatus;
use fastly_sys::fastly_shielding;

const MAXIMUM_BACKEND_NAME_LENGTH: usize = 1024;

/// A structure representing a shielding site within Fastly.
pub struct Shield {
    plain_target: String,
    ssl_target: String,
    first_byte_timeout: Option<Duration>,
    is_me: bool,
}

impl Shield {
    /// Load information about the given shield.
    ///
    /// Returns an object representing the shield if it is active, or an error if
    /// the string is malformed or the shield doesn't exist.
    ///
    /// Shield names are defined [on this
    /// webpage](https://www.fastly.com/documentation/guides/concepts/shielding/#shield-locations),
    /// in the "shield code" column. For example, the string "pdx-or-us" will look
    /// up our Portland, OR, USA shield site, while "paris-fr" will look up our Paris
    /// site.
    ///
    /// If you are using a major cloud provider for your primary origin site, consider
    /// looking at the "Recommended for" column, to find the Fastly POP most closely
    /// located to the given cloud provider.
    pub fn new<S: AsRef<str>>(name: S) -> Result<Self, FastlyStatus> {
        let name_bytes = name.as_ref().as_bytes();
        let mut out_buffer_size = 1024;

        let out_buffer = loop {
            let mut out_buffer = vec![0; out_buffer_size];
            let mut used_amt = 0;

            let result = unsafe {
                fastly_shielding::shield_info(
                    name_bytes.as_ptr(),
                    name_bytes.len(),
                    out_buffer.as_mut_ptr(),
                    out_buffer_size,
                    &mut used_amt,
                )
            };

            match result {
                FastlyStatus::OK => {
                    out_buffer.resize(used_amt as usize, 0);
                    break out_buffer;
                }

                FastlyStatus::BUFLEN => {
                    out_buffer_size *= 2;
                }

                _ => return Err(result),
            }
        };

        if out_buffer.len() < 3 {
            return Err(FastlyStatus::ERROR);
        }

        let is_me = out_buffer[0] != 0;
        let mut strings = out_buffer[1..].split(|c| *c == 0);
        let plain_bytes = strings.next().ok_or(FastlyStatus::ERROR)?;
        let ssl_bytes = strings.next().ok_or(FastlyStatus::ERROR)?;
        // because the buffer ends in a null, we should end up with
        // one blank string, and then the end of the iterator
        let empty = strings.next().ok_or(FastlyStatus::ERROR)?;
        if !empty.is_empty() {
            return Err(FastlyStatus::ERROR);
        }
        if strings.next().is_some() {
            return Err(FastlyStatus::ERROR);
        }

        let plain_target =
            String::from_utf8(plain_bytes.to_vec()).map_err(|_| FastlyStatus::ERROR)?;
        let ssl_target = String::from_utf8(ssl_bytes.to_vec()).map_err(|_| FastlyStatus::ERROR)?;
        Ok(Shield {
            is_me,
            plain_target,
            ssl_target,
            first_byte_timeout: None,
        })
    }

    /// Returns whether we are currently operating on the given shield.
    ///
    /// Technically, this may also return true in very isolated incidents in which Fastly is
    /// routing traffic from the target shield POP to the POP that this code is running on, but in
    /// these situations the results should be approximately identical.
    ///
    /// (For example, it may be the case that you are asking to shield to 'pdx-or-us'. But, for
    /// load balancing, performance, or other reasons, Fastly is temporarily shifting shielding
    /// traffic from Portland to Seattle. In that case, this function may return true for hosts
    /// running on 'bfi-wa-us', our Seattle site, because effectively the shield has moved to that
    /// location. This should give you a slightly faster experience than the alternative, in which
    /// this function would return false, you would try to forward your traffic to the Portland
    /// site, and then that traffic would be caught and redirected back to Seattle.)
    pub fn running_on(&self) -> bool {
        self.is_me
    }

    /// Creates a copy of this Shield with the first-byte timeout configured.
    ///
    /// The configured first-byte timeout will apply to any backends derived from
    /// the returned Shield ([`Shield::encrypted_backend`] or [`Shield::unencrypted_backend`]).
    /// See [`BackendBuilder::first_byte_timeout`].
    ///
    /// [`BackendBuilder::first_byte_timeout`]: crate::backend::BackendBuilder::first_byte_timeout
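    ///
    /// For example (the shield code is a placeholder, and errors are unwrapped
    /// only for brevity):
    ///
    /// ```no_run
    /// use std::time::Duration;
    /// use fastly::shielding::Shield;
    ///
    /// let backend = Shield::new("pdx-or-us")
    ///     .expect("shield lookup failed")
    ///     .with_first_byte_timeout(Duration::from_secs(2))
    ///     .encrypted_backend()
    ///     .expect("could not build shield backend");
    /// ```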
    #[must_use]
    pub fn with_first_byte_timeout(&self, timeout: Duration) -> Self {
        Self {
            first_byte_timeout: Some(timeout),
            is_me: self.is_me,
            plain_target: self.plain_target.clone(),
            ssl_target: self.ssl_target.clone(),
        }
    }

    /// Returns a Backend representing an unencrypted connection to the POP.
    ///
    /// Generally speaking, we encourage users to use [`Shield::encrypted_backend`]
    /// instead of this function. Data sent over this backend, the unencrypted
    /// version, travels the open internet with no protections. In most cases,
    /// this is not what you want. However, in some cases, such as when you want
    /// to ship large data blobs that you know are already encrypted, using these
    /// backends can avoid a double-encryption performance penalty.
    pub fn unencrypted_backend(&self) -> Result<Backend, FastlyStatus> {
        self.backend_builder(false).finish()
    }

    /// Returns a Backend representing an encrypted connection to the POP.
    ///
    /// For reference, this is almost always the backend that you want to use. Only
    /// use [`Shield::unencrypted_backend`] in situations in which you are 100% sure
    /// that all the data you will send and receive over the backend is already
    /// encrypted.
    pub fn encrypted_backend(&self) -> Result<Backend, FastlyStatus> {
        self.backend_builder(true).finish()
    }

    fn backend_builder(&self, encrypt_data: bool) -> ShieldBackendBuilder<'_> {
        ShieldBackendBuilder {
            _originating_shield: self,
            chosen_backend: if encrypt_data {
                self.ssl_target.as_str()
            } else {
                self.plain_target.as_str()
            },
            cache_key: None,
        }
    }
}

struct ShieldBackendBuilder<'a> {
    _originating_shield: &'a Shield,
    chosen_backend: &'a str,
    cache_key: Option<String>,
}

impl ShieldBackendBuilder<'_> {
    /// Convert this builder into its final backend form, or return an error if
    /// something has gone wrong.
    pub fn finish(self) -> Result<Backend, FastlyStatus> {
        use fastly_shielding::{backend_for_shield, ShieldBackendConfig, ShieldBackendOptions};
        let name_bytes = self.chosen_backend.as_bytes();
        let name_len = name_bytes.len();
        let mut options_mask = ShieldBackendOptions::default();
        let mut options = ShieldBackendConfig::default();
        let mut backend_name_buffer = vec![0; MAXIMUM_BACKEND_NAME_LENGTH];
        let mut final_backend_name_len = 0;

        if let Some(cache_key) = self.cache_key.as_deref() {
            options_mask.insert(ShieldBackendOptions::CACHE_KEY);
            options.cache_key = cache_key.as_ptr();
            options.cache_key_len = cache_key.len() as u32;
        }

        if let Some(timeout) = self._originating_shield.first_byte_timeout {
            options_mask.insert(ShieldBackendOptions::FIRST_BYTE_TIMEOUT);
            options.first_byte_timeout_ms = timeout.as_millis().try_into().unwrap_or(u32::MAX);
        }

        let result = unsafe {
            backend_for_shield(
                name_bytes.as_ptr(),
                name_len,
                options_mask,
                &options,
                backend_name_buffer.as_mut_ptr(),
                MAXIMUM_BACKEND_NAME_LENGTH,
                &mut final_backend_name_len,
            )
        };

        if result != FastlyStatus::OK {
            return Err(result);
        }

        backend_name_buffer.resize(final_backend_name_len as usize, 0);
        let backend_name =
            String::from_utf8(backend_name_buffer).map_err(|_| FastlyStatus::ERROR)?;

        Backend::from_name(&backend_name).map_err(|_| FastlyStatus::ERROR)
    }
}