Module tensorflow_proto::tensorflow::tpu
Structs
AdadeltaParameters | https://www.tensorflow.org/api_docs/python/tf/train/AdadeltaOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L68 |
AdagradParameters | https://www.tensorflow.org/api_docs/python/tf/train/AdagradOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L151 |
AdamParameters | The Adam optimizer does not implement hyper-parameter update; use the dynamic learning rate feature instead, setting the learning rate to user learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t), where t is the current timestep (a sketch of this schedule follows the table). |
BoundedAdagradParameters | Algorithm in http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. |
CenteredRmsPropParameters | https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L372 |
ClippingLimits | |
CompilationResultProto | Describes the result of a TPU compilation. |
DynamicLearningRate | Dynamic learning rate specification in the TPUEmbeddingConfiguration. The actual learning rates are provided as a scalar input list to the SendTPUEmbeddingGradients Op, indexed by their tag specified through the following proto (a construction sketch follows the table). |
FtrlParameters | https://www.tensorflow.org/api_docs/python/tf/train/FtrlOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L192 |
GradientAccumulationStatus | Status of using gradient accumulation (doing two passes over the input gradients: one to accumulate them into a temporary array and another to apply them using the actual optimization algorithm). The extra message is to wrap the enum for scoping. |
HotIdReplicationConfiguration | Configuration proto for hot ID optimization. This is an experimental feature that is currently disabled by default. |
LearningRate | Source of learning rate to use. |
MdlAdagradLightParameters | Variant of the algorithm in http://proceedings.mlr.press/v44/shamir15.pdf. |
MomentumParameters | https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L271 |
OnlineYogiParameters | The online Yogi optimizer does not implement hyper-parameter update; use the dynamic learning rate feature instead, setting the learning rate to user learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t), where t is the current timestep, as for AdamParameters above. |
OptimizationParameters | |
PaddingMap | A mapping between the dynamic shape dimension of an input and the arg that represents the real shape (sketched after the table). |
ProximalAdagradParameters | https://www.tensorflow.org/api_docs/python/tf/train/ProximalAdagradOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L164 |
RmsPropParameters | https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L356 |
StateVariableSpecification | Specification of an optimization algorithm's state variables (both the main value vector and any extra accumulators, etc.). This proto is only used internally by the TPU software and is not exposed directly to the TF model. |
StochasticGradientDescentParameters | https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L423 |
TopologyProto | Describes the geometry of a TPU mesh. |
TpuEmbeddingConfiguration | |
TpuEmbeddingOutputLayout | |
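
The AdamParameters and OnlineYogiParameters entries above prescribe the same dynamic learning rate schedule. A minimal sketch of that formula in plain Rust; the function name and signature are illustrative, not part of this crate:

```rust
/// Effective learning rate for AdamParameters / OnlineYogiParameters,
/// per the formula above: user_lr * sqrt(1 - beta2^t) / (1 - beta1^t).
fn effective_learning_rate(user_lr: f32, beta1: f32, beta2: f32, t: i32) -> f32 {
    user_lr * (1.0 - beta2.powi(t)).sqrt() / (1.0 - beta1.powi(t))
}

fn main() {
    // With typical Adam settings (beta1 = 0.9, beta2 = 0.999), the
    // bias-correction factor approaches 1 as the timestep t grows.
    for t in [1, 10, 1000] {
        println!("t = {:4}: lr = {}", t, effective_learning_rate(0.001, 0.9, 0.999, t));
    }
}
```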
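A DynamicLearningRate is selected through the LearningRate oneof inside OptimizationParameters. The sketch below assumes the usual prost-generated layout for these messages (a `learning_rate` module holding the oneof enum with `Constant` and `Dynamic` variants, and an `Option<LearningRate>` field on OptimizationParameters); check the struct pages in this module for the exact field names:

```rust
use tensorflow_proto::tensorflow::tpu::{
    learning_rate, DynamicLearningRate, LearningRate, OptimizationParameters,
};

fn main() {
    // Tag 0 selects the first scalar in the list passed to the
    // SendTPUEmbeddingGradients Op, per the DynamicLearningRate note above.
    let lr = LearningRate {
        learning_rate: Some(learning_rate::LearningRate::Dynamic(DynamicLearningRate {
            tag: 0,
        })),
    };
    let params = OptimizationParameters {
        learning_rate: Some(lr),
        ..Default::default()
    };
    println!("{:?}", params);
}
```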
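PaddingMap ties a padded, dynamically shaped input to the scalar argument that carries its real size. A hedged construction sketch, assuming the three int32 fields from the TensorFlow proto (arg_index, shape_index, padding_arg_index):

```rust
use tensorflow_proto::tensorflow::tpu::PaddingMap;

fn main() {
    // Dimension 0 of argument 0 is dynamically shaped; argument 2 is the
    // scalar arg holding that dimension's real (unpadded) size.
    let padding = PaddingMap {
        arg_index: 0,
        shape_index: 0,
        padding_arg_index: 2,
    };
    println!("{:?}", padding);
}
```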