Module tensorflow_proto::xla

Modules

buffer_allocation_proto
buffer_assignment_proto
channel_handle
debug_options
device_assignment_proto
dynamic_parameter_binding_proto
gpu
heap_simulator_trace
hlo_input_output_alias_proto
hlo_instruction_proto
hlo_profile_printer_data
hlo_reduce_precision_options
hlo_schedule_proto
logical_buffer_proto
op_sharding
padding_config
precision_config
triangular_solve_options
while_loop_backend_config

Structs

BufferAllocationProto

Serialization of BufferAllocation.

BufferAssignmentProto

Serialization of BufferAssignment.

ChannelHandle

Handle given to a user to represent a channel between two computations via a Send and Recv instruction pair. Channels are unbuffered, so Send instructions will be blocked until the data is transferred.
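
For illustration, a minimal sketch of filling in a ChannelHandle, assuming the crate exposes prost-generated structs (as the submodules above suggest) that implement Default and use the field names from xla_data.proto:

```rust
use tensorflow_proto::xla::ChannelHandle;

// Hypothetical value: in practice the handle id comes back from the service
// in a CreateChannelHandleResponse rather than being chosen by the client.
let channel = ChannelHandle {
    handle: 1,
    ..Default::default() // leaves the channel type at its default value
};
```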

CholeskyOptions
CompileRequest
CompileResponse
ComputationGraphStatsRequest
ComputationStats

Statistics of a computation.

ComputationStatsResponse
ComputeConstantGraphRequest
ComputeConstantResponse
ConvolutionDimensionNumbers
CreateChannelHandleRequest
CreateChannelHandleResponse
DebugOptions

Debugging options for XLA. These options may change at any time; there are no guarantees about backward or forward compatibility for these fields.

DeconstructTupleRequest
DeconstructTupleResponse
DeviceAssignmentProto

DeviceAssignmentProto is a serialized form of the DeviceAssignment class, which represents the device ids assigned to a set of replicated computations. See the xla::DeviceAssignment class comment for more details.
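
A hypothetical sketch of a two-replica, single-computation assignment, assuming prost-generated structs and the nested ComputationDevice type from the device_assignment_proto submodule listed above:

```rust
use tensorflow_proto::xla::{
    device_assignment_proto::ComputationDevice, DeviceAssignmentProto,
};

// Two replicas of one computation, pinned to device ids 0 and 1.
let assignment = DeviceAssignmentProto {
    replica_count: 2,
    computation_count: 1,
    computation_devices: vec![ComputationDevice {
        replica_device_ids: vec![0, 1],
    }],
};
```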

DeviceHandle

Handle given to a user that represents a replicated virtual device. Each replicated device represents N physical devices for execution where N is the number of replicas.

DotDimensionNumbers
DynamicParameterBindingProto
ExecuteGraphParallelRequest
ExecuteGraphRequest

TODO(b/118493728): Remove this and ExecuteGraphParallelRequest and replace the uses with calls to Compile and Execute.

ExecuteParallelResponse
ExecuteRequest
ExecuteResponse
ExecutionHandle

Handle given to a user that represents an execution that the user launched asynchronously on the device.

ExecutionOptions

These settings control how XLA compiles and/or runs code. Not all settings will have an effect on every platform.

ExecutionProfile

Profile data from the execution of a computation.

GatherDimensionNumbers

Describes the dimension numbers for a gather operation.
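
As a rough illustration, gathering whole rows of a rank-2 operand with a 1-D tensor of row indices could use dimension numbers like the sketch below; the field names follow xla_data.proto and the struct layout is assumed to be prost-generated:

```rust
use tensorflow_proto::xla::GatherDimensionNumbers;

let dnums = GatherDimensionNumbers {
    offset_dims: vec![1],          // output dimension 1 holds the gathered row
    collapsed_slice_dims: vec![0], // the sliced row dimension is collapsed away
    start_index_map: vec![0],      // indices address operand dimension 0
    index_vector_dim: 1,           // indices are scalars (vector dim == rank)
    ..Default::default()
};
```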

GetDeviceHandlesRequest
GetDeviceHandlesResponse
GetShapeRequest
GetShapeResponse
GlobalDataHandle

Handle given to a user that represents a globally accessible allocation. Contrast this against a ComputationDataHandle, which is not globally accessible, since it only exists within a specific computation.

HeapSimulatorTrace

A trace of a HeapSimulator run.

HloComputationProto

Serialization of HloComputation.

HloInputOutputAliasProto
HloInstructionProto

Serialization of HloInstruction. Next ID: 68

HloModuleGroupProto

An abstraction representing a set of HLO modules built to run concurrently across different devices.

HloModuleProto

Serialization of HloModule.

HloProfilePrinterData

Describes how to pretty-print a profile counter array gathered for a specific HloModule.

HloProto

Grouping message that contains the HLO serialization messages above (such as HloModuleProto and BufferAssignmentProto).

HloReducePrecisionOptions

Options for the HLO insert-reduce-precision-operations pass.

HloScheduleProto

Serialization of an HLO schedule. An HLO schedule contains a total order of instructions for each non-fusion computation in the module.

HloSnapshot

Encapsulates HloProto together with the arguments, result, and execution_platform. This message is used for purposes such as analysis/replay/file-storage.

LayoutProto

A layout describes how the array is placed in (1D) memory space. This includes the minor-to-major ordering of dimensions within a shape.
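
A small, hypothetical sketch of a row-major layout for a rank-2 array, assuming prost-generated fields named after xla_data.proto (format, minor_to_major):

```rust
use tensorflow_proto::xla::{Format, LayoutProto};

// Row-major: dimension 1 is minor-most (fastest varying), dimension 0 is major-most.
let layout = LayoutProto {
    format: Format::Dense as i32, // assumes prost's enum-as-i32 convention
    minor_to_major: vec![1, 0],
    ..Default::default()
};
```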

LiteralProto

Literals are used when the server and client need to exchange materialized data / results. Literals are also used to describe constants used in computations.

LoadDataRequest
LoadDataResponse
LogicalBufferProto

Serialization of LogicalBuffer.

OpMetadata

Symbolization metadata for HLO Instructions.

OpSharding
PaddingConfig

Describes the padding configuration for the Pad operation. The padding amount on both edges, as well as between the elements, is specified for each dimension.
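
For example, padding a 1-D array with two elements of low edge padding, one element of high edge padding, and one element inserted between neighbours might be described as in this sketch, assuming the prost-generated PaddingConfigDimension from the padding_config submodule above:

```rust
use tensorflow_proto::xla::{padding_config::PaddingConfigDimension, PaddingConfig};

let config = PaddingConfig {
    dimensions: vec![PaddingConfigDimension {
        edge_padding_low: 2,  // elements added before the first element
        edge_padding_high: 1, // elements added after the last element
        interior_padding: 1,  // elements inserted between adjacent elements
    }],
};
```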

ParameterReplication

Describes whether all data-parallelism replicas will receive the same parameter data at each buffer.

PrecisionConfig

Used to indicate the precision configuration. It has backend-specific meaning.

ProgramShapeProto

Shape of the parameters and output of a computation (like a traditional function signature).
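
A hypothetical sketch of the signature (f32[2,3]) -> f32[2,3], reusing ShapeProto from this module and assuming prost-generated fields (parameters, parameter_names, result):

```rust
use tensorflow_proto::xla::{PrimitiveType, ProgramShapeProto, ShapeProto};

// f32[2,3]; the element type is stored as an i32 under prost's enum convention.
let arg = ShapeProto {
    element_type: PrimitiveType::F32 as i32,
    dimensions: vec![2, 3],
    ..Default::default()
};

// One parameter named "x" and a result of the same shape.
let signature = ProgramShapeProto {
    parameters: vec![arg.clone()],
    parameter_names: vec!["x".to_string()],
    result: Some(arg),
    ..Default::default()
};
```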

ReplicaGroup

Describes the replica groups in a cross-replica op (e.g., all-reduce and all-to-all).
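
For instance, splitting four replicas into two independent all-reduce groups could be expressed as in this sketch, assuming a prost-generated replica_ids field:

```rust
use tensorflow_proto::xla::ReplicaGroup;

// Replicas {0, 1} reduce among themselves, as do replicas {2, 3}.
let groups = vec![
    ReplicaGroup { replica_ids: vec![0, 1] },
    ReplicaGroup { replica_ids: vec![2, 3] },
];
```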

ResetDeviceRequest
ResetDeviceResponse
ScatterDimensionNumbers

Describes the dimension numbers for a scatter operation.

ShapeProto

A shape describes the number of dimensions in the array, the size of each dimension, and the primitive component type.
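
A minimal sketch of an f32[2,3] shape with an explicit row-major layout, assuming prost-generated fields named after xla_data.proto (element_type, dimensions, layout):

```rust
use tensorflow_proto::xla::{LayoutProto, PrimitiveType, ShapeProto};

let shape = ShapeProto {
    element_type: PrimitiveType::F32 as i32, // enum stored as i32 under prost
    dimensions: vec![2, 3],
    layout: Some(LayoutProto {
        minor_to_major: vec![1, 0], // row-major
        ..Default::default()
    }),
    ..Default::default()
};
```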

SourceTarget

Describes the source-target pair in the collective permute op.
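
For example, a collective permute that sends replica 0's data to replica 1 would carry the pair below (a sketch assuming prost-generated source/target fields):

```rust
use tensorflow_proto::xla::SourceTarget;

// Data flows from replica 0 (source) to replica 1 (target).
let pair = SourceTarget { source: 0, target: 1 };
```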

TileProto

Describes a tile used in tiling-based layout. Refer to g3doc/third_party/tensorflow/compiler/xla/g3doc/layout_with_tiling.md for details about tiling-based layout.

TransferFromOutfeedRequest
TransferFromOutfeedResponse
TransferToClientRequest
TransferToClientResponse
TransferToInfeedRequest
TransferToInfeedResponse
TransferToServerRequest
TransferToServerResponse
TriangularSolveOptions
UnpackRequest
UnpackResponse
UnregisterRequest
UnregisterResponse
WaitForExecutionRequest
WaitForExecutionResponse
WhileLoopBackendConfig

A backend-config for kWhile loops that stores the loop's trip count, if it is known.

Window

Describes the windowing in an operation such as convolution.
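
A hypothetical sketch of a width-3, stride-2 window with one element of padding on each side and no dilation, using the WindowDimension entry listed next and assuming prost-generated structs with xla_data.proto field names:

```rust
use tensorflow_proto::xla::{Window, WindowDimension};

let window = Window {
    dimensions: vec![WindowDimension {
        size: 3,
        stride: 2,
        padding_low: 1,
        padding_high: 1,
        window_dilation: 1, // 1 means no dilation of the window
        base_dilation: 1,   // 1 means no dilation of the base area
        ..Default::default()
    }],
};
```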

WindowDimension

Enums

FftType
Format

A format specifies the method used by a layout to store an array in memory.

PrimitiveType

Primitive types are the individual values that can be held in rectangular multidimensional arrays. A description of the rectangular multidimensional array dimensions / primitive type is given by ShapeProto.

RandomDistribution