Support for shielding for Compute services
This module should allow authors of Compute services to accomplish “origin offload”, or the ability to ensure that origin requests come from a more limited set of Fastly nodes. Because this is Compute, the implementation provides a bit more flexibility than some of the corresponding VCL alternatives, but with that flexibility comes the need for a bit more care.
In Compute, a shield is “just another Backend”: the Compute service makes HTTP requests to the shield and receives responses. The service can cache, combine, or rewrite those responses as usual.
This means, for example, that a Compute program whose data lives in two different cloud storage providers can offload traffic to two different shields, and then combine the data dynamically at a POP close to the user.
A cost of this flexibility is that some care needs to be taken when considering caching, especially for data that you may wish to purge.
For example, the following is a common pattern for services that use the Shield API:

```rust
use fastly::http::StatusCode;
use fastly::shielding::Shield;
use fastly::{Error, Request, Response};

#[fastly::main]
pub fn main(req: Request) -> Result<Response, Error> {
    let shield = match Shield::new("<YOUR SHIELD POP>") {
        Ok(v) => v,
        Err(e) => {
            return Ok(Response::from_status(StatusCode::INTERNAL_SERVER_ERROR)
                .with_body(format!("Could not find shield '<YOUR SHIELD POP>': {:?}", e)))
        }
    };

    if !shield.running_on() {
        // If we're not running on the shield POP, look up the encrypted
        // tunnel to that backend, and try to forward the request on.
        let response = match shield.encrypted_backend() {
            Err(e) => Ok(Response::from_status(StatusCode::INTERNAL_SERVER_ERROR)
                .with_body(format!(
                    "Could not convert shield <YOUR SHIELD HERE> into an encrypted backend: {:?}",
                    e
                ))),
            Ok(backend) => {
                // Think very hard about caching right here.
                match req.send(backend) {
                    Err(e) => Ok(Response::from_status(StatusCode::INTERNAL_SERVER_ERROR)
                        .with_body(format!("Could not send request to shield: {}", e))),
                    Ok(resp) => {
                        // If you want to add a header somewhere to mark that you got this
                        // from a shield, that can be handy for debugging, and this would
                        // be the place to do it.
                        Ok(resp)
                    }
                }
            }
        };
        return response;
    }

    // If we get here, then we're running on the shield!
    //
    // <complicated stuff>
    //
    // This value is just here to make the example compile.
    Ok(Response::from_status(StatusCode::OK))
}
```

In some cases, where the comment says “Think very hard about caching right here”, you should do nothing. For example, if you don’t plan to purge the data being returned by the shield node, this is a great option: your data will be cached twice, once at the client’s local POP and again at the shield POP.
The difficulty comes if you may want to do purges in the future. In that case,
you should think carefully about the cache keys that will be generated between
(a) the edge host and the shield host, and (b) the shield host and the origin.
In some cases, those cache keys will be identical, and a purge will affect both
the edge and shield nodes equally. However, there are several mechanisms you could
be using that could negate this, so that the cache keys for (a) and (b) are different.
In those cases, you may want to consider the use of surrogate keys (to connect
the two requests under the same key), or other mechanisms (like crate::Request::set_before_send)
that can adjust your queries so that the cache keys match, to mitigate these problems.
You may also be interested in looking at our general discussion on shielding and purging.
Regardless, it is worth thinking carefully about various caching trade-offs that are available to you via this API, particularly if purging is an important feature for you.
Structs
- Shield
- A structure representing a shielding site within Fastly.