Struct tensorflow_proto::tensorflow::CallableOptions
Defines a subgraph in another GraphDef as a set of feed points and nodes to be fetched or executed. Compare with the arguments to Session::Run().
Fields
feed: Vec<String>
Tensors to be fed in the callable. Each feed is the name of a tensor.
fetch: Vec<String>
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
target: Vec<String>
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
run_options: Option<RunOptions>
Options that will be applied to each run.
tensor_connection: Vec<TensorConnection>
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
feed_devices: HashMap<String, String>
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.
For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
And for its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable().
This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed.
Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
fetch_devices: HashMap<String, String>
Maps the name of a fetched tensor (which appears in the 'fetch' field above) to the fully qualified name of the device that should own the memory backing the fetched tensor, as described for feed_devices.
fetch_skip_sync: bool
By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit.
If this option is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking Device::Sync() on the underlying device(s), or by feeding the tensors back to the same Session using feed_devices with the same corresponding device name.
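The proto-text example above can be mirrored in Rust. The sketch below uses a hypothetical local stand-in struct (`CallableOptionsSketch`, with the same field names and types as the fields documented here) so it compiles without the `tensorflow_proto` crate; with the real generated struct the field assignments would look the same.

```rust
use std::collections::HashMap;

// Hypothetical stand-in mirroring the generated struct's fields so this
// sketch is self-contained; the real type is
// tensorflow_proto::tensorflow::CallableOptions.
#[derive(Debug, Default)]
struct CallableOptionsSketch {
    feed: Vec<String>,
    fetch: Vec<String>,
    feed_devices: HashMap<String, String>,
    fetch_devices: HashMap<String, String>,
    fetch_skip_sync: bool,
}

fn build_callable_options() -> CallableOptionsSketch {
    let gpu = "/job:localhost/replica:0/task:0/device:GPU:0".to_string();
    let mut opts = CallableOptionsSketch::default();
    opts.feed = vec!["a:0".into(), "b:0".into()];
    opts.fetch = vec!["x:0".into(), "y:0".into()];
    // "a:0" will be fed from GPU memory; "b:0" stays host-backed by default.
    opts.feed_devices.insert("a:0".into(), gpu.clone());
    // "y:0" will be fetched into GPU memory; "x:0" stays host-backed.
    opts.fetch_devices.insert("y:0".into(), gpu);
    opts
}

fn main() {
    let opts = build_callable_options();
    println!("{:?}", opts);
}
```

Note that tensors absent from the maps ("b:0", "x:0") keep the default host-memory behavior described above.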
Trait Implementations
impl Clone for CallableOptions
pub fn clone(&self) -> CallableOptions
pub fn clone_from(&mut self, source: &Self)
impl Debug for CallableOptions
impl Default for CallableOptions
impl Message for CallableOptions
pub fn encode_raw<B>(&self, buf: &mut B) where
    B: BufMut,
pub fn merge_field<B>(
    &mut self,
    tag: u32,
    wire_type: WireType,
    buf: &mut B,
    ctx: DecodeContext
) -> Result<(), DecodeError> where
    B: Buf,
pub fn encoded_len(&self) -> usize
pub fn clear(&mut self)
pub fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError> where
    B: BufMut,
pub fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError> where
    B: BufMut,
pub fn decode<B>(buf: B) -> Result<Self, DecodeError> where
    Self: Default,
    B: Buf,
pub fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where
    Self: Default,
    B: Buf,
pub fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError> where
    B: Buf,
pub fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError> where
    B: Buf,
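The `*_length_delimited` methods above write and read messages framed by a varint length prefix, which lets several messages share one stream. As an illustration of that framing only (not the actual prost implementation), the sketch below hand-rolls the LEB128 varint prefix over an opaque payload:

```rust
// Write n as a LEB128 varint: 7 bits per byte, high bit set on all but
// the last byte. This is the length prefix used by length-delimited
// protobuf framing.
fn write_varint(mut n: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

// Read a varint from the front of buf; returns (value, bytes consumed).
fn read_varint(buf: &[u8]) -> (u64, usize) {
    let (mut n, mut shift, mut i) = (0u64, 0u32, 0usize);
    loop {
        let b = buf[i];
        n |= u64::from(b & 0x7f) << shift;
        i += 1;
        if b & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    (n, i)
}

// Prefix a payload with its varint-encoded length.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    write_varint(payload.len() as u64, &mut out);
    out.extend_from_slice(payload);
    out
}

// Strip the length prefix and return the payload bytes.
fn unframe(buf: &[u8]) -> &[u8] {
    let (len, used) = read_varint(buf);
    &buf[used..used + len as usize]
}

fn main() {
    // The payload stands in for the bytes produced by encode_raw.
    let msg: &[u8] = b"encoded CallableOptions bytes";
    let framed = frame(msg);
    assert_eq!(unframe(&framed), msg);
    println!("framed {} payload bytes as {}", msg.len(), framed.len());
}
```

In practice you would call `encode_length_delimited` / `decode_length_delimited` directly rather than framing by hand; the sketch only shows what the prefix looks like on the wire.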
impl PartialEq<CallableOptions> for CallableOptions
pub fn eq(&self, other: &CallableOptions) -> bool
pub fn ne(&self, other: &CallableOptions) -> bool
impl StructuralPartialEq for CallableOptions
Auto Trait Implementations
impl RefUnwindSafe for CallableOptions
impl Send for CallableOptions
impl Sync for CallableOptions
impl Unpin for CallableOptions
impl UnwindSafe for CallableOptions
Blanket Implementations
impl<T> Any for T where
    T: 'static + ?Sized,
impl<T> Borrow<T> for T where
    T: ?Sized,
impl<T> BorrowMut<T> for T where
    T: ?Sized,
pub fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T, U> Into<U> for T where
    U: From<T>,
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
pub fn clone_into(&self, target: &mut T)
impl<T, U> TryFrom<U> for T where
    U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,