pub struct CallableOptions {
pub feed: Vec<String>,
pub fetch: Vec<String>,
pub target: Vec<String>,
pub run_options: Option<RunOptions>,
pub tensor_connection: Vec<TensorConnection>,
pub feed_devices: HashMap<String, String>,
pub fetch_devices: HashMap<String, String>,
pub fetch_skip_sync: bool,
}
Defines a subgraph in another GraphDef as a set of feed points and nodes to be fetched or executed.

Compare with the arguments to Session::Run().
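As a rough sketch of how this struct is typically populated from Rust, the snippet below builds a callable that feeds two tensors, fetches one, and runs one node for its side effects only. The tensor names reuse the example given for feed_devices further down this page; the node name init_op and the import path for CallableOptions are placeholders, since the generated module path depends on how this crate is consumed.

    // Assumes CallableOptions is in scope from this crate's generated module.
    let opts = CallableOptions {
        // Tensor names use the "node:output_index" form.
        feed: vec!["a:0".to_string(), "b:0".to_string()],
        fetch: vec!["x:0".to_string()],
        // Hypothetical node run only for its side effects; its outputs are not returned.
        target: vec!["init_op".to_string()],
        // Remaining fields keep their prost-generated defaults.
        ..Default::default()
    };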
Fields

feed: Vec<String>
Tensors to be fed in the callable. Each feed is the name of a tensor.
fetch: Vec<String>
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
target: Vec<String>
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
run_options: Option<RunOptions>
Options that will be applied to each run.
tensor_connection: Vec<TensorConnection>
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
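For illustration, here is a hedged sketch of wiring one such connection. It assumes TensorConnection exposes from_tensor and to_tensor String fields, mirroring the TensorFlow TensorConnection proto; check the generated type in this crate for the exact shape.

    // Substitute the value produced at "y:0" for the tensor "a:0" inside the callable.
    // Field names are an assumption based on the TensorConnection proto.
    let conn = TensorConnection {
        from_tensor: "y:0".to_string(),
        to_tensor: "a:0".to_string(),
    };
    let opts = CallableOptions {
        tensor_connection: vec![conn],
        ..Default::default()
    };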
feed_devices: HashMap<String, String>
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in ‘feed’ or ‘fetch’ fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.
For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:

- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:

- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable().
This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed.
Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
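Rendered against the generated Rust struct, the options from the example above might look like the following sketch. The device string is taken verbatim from that example, and std::collections::HashMap matches the field types shown at the top of this page.

    use std::collections::HashMap;

    let gpu = "/job:localhost/replica:0/task:0/device:GPU:0".to_string();
    let opts = CallableOptions {
        feed: vec!["a:0".to_string(), "b:0".to_string()],
        fetch: vec!["x:0".to_string(), "y:0".to_string()],
        // "a:0" will be fed from GPU memory; "b:0" stays on the host by default.
        feed_devices: HashMap::from([("a:0".to_string(), gpu.clone())]),
        // "y:0" will be returned in GPU memory; "x:0" stays on the host.
        fetch_devices: HashMap::from([("y:0".to_string(), gpu)]),
        ..Default::default()
    };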
fetch_devices: HashMap<String, String>
fetch_skip_sync: bool

By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit.

If this option is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking Device::Sync() on the underlying device(s), or by feeding the tensors back to the same session using feed_devices with the same corresponding device name.
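A short sketch of opting out of that synchronization follows; per the note above, the caller then owns ensuring the fetched values have been produced (for example via Device::Sync()) before reading them.

    let opts = CallableOptions {
        fetch: vec!["y:0".to_string()],
        // Skip the implicit GPU stream sync on fetch; synchronize manually instead.
        fetch_skip_sync: true,
        ..Default::default()
    };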
Trait Implementations

impl Clone for CallableOptions
    fn clone(&self) -> CallableOptions
    fn clone_from(&mut self, source: &Self)

impl Debug for CallableOptions

impl Default for CallableOptions

impl Message for CallableOptions
    fn encoded_len(&self) -> usize
    fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError>
    fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError>
    fn decode<B>(buf: B) -> Result<Self, DecodeError>
    fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError>
    fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError>
    fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError>
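Because CallableOptions implements prost's Message trait, it can be serialized to and parsed from the protobuf wire format. The round trip below is a hedged sketch: it assumes the prost crate (which supplies the Message trait) is available as a dependency so the trait methods resolve.

    use prost::Message;

    let opts = CallableOptions {
        feed: vec!["a:0".to_string()],
        fetch: vec!["x:0".to_string()],
        ..Default::default()
    };

    // Serialize into a pre-sized buffer...
    let mut buf = Vec::with_capacity(opts.encoded_len());
    opts.encode(&mut buf).expect("Vec<u8> grows as needed, so encoding should not fail");

    // ...and parse the bytes back into an equivalent value.
    let decoded = CallableOptions::decode(buf.as_slice()).expect("valid wire-format bytes");
    assert_eq!(decoded.feed, opts.feed);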