TFGENZOO.flows.utils.actnorm_activation module
class TFGENZOO.flows.utils.actnorm_activation.ActnormActivation(scale: float = 1.0, logscale_factor=3.0, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer
Actnorm layer without an inverse function.

This layer cannot synchronize the mean / variance across multiple GPUs.
- scale – scaling factor
  Type: float
- logscale_factor – factor applied to the log-scale parameter logs
  Type: float
Note

- initialization (data-dependent, computed from the first batch):
    mean = mean(first_batch)
    var  = variance(first_batch)
    logs = log(scale / sqrt(var)) / logscale_factor
    bias = -mean
- forward formula (forward only, see the sketch below):
    logs  = logs * logscale_factor
    scale = exp(logs)
    z     = (x + bias) * scale
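To make the two steps above concrete, here is a minimal, self-contained sketch of the same computation. It is illustrative only: the function name actnorm_forward_sketch, the choice of reduction axes, and the 1e-6 numerical guard are assumptions, not part of the library.

```python
import tensorflow as tf


def actnorm_forward_sketch(x: tf.Tensor,
                           scale: float = 1.0,
                           logscale_factor: float = 3.0) -> tf.Tensor:
    """Data-dependent initialization + forward formula from the Note above."""
    # Per-channel statistics over the batch (and spatial) axes.
    reduce_axes = list(range(len(x.shape) - 1))
    mean = tf.reduce_mean(x, axis=reduce_axes, keepdims=True)
    var = tf.math.reduce_variance(x, axis=reduce_axes, keepdims=True)

    # initialization: logs = log(scale / sqrt(var)) / logscale_factor, bias = -mean
    # (1e-6 is a stability guard added here, not part of the formula)
    logs = tf.math.log(scale / tf.sqrt(var + 1e-6)) / logscale_factor
    bias = -mean

    # forward: logs = logs * logscale_factor, scale = exp(logs), z = (x + bias) * scale
    return (x + bias) * tf.exp(logs * logscale_factor)
```

In the layer itself the statistics are taken once from the first batch and stored as weights; the sketch recomputes them on every call for brevity.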
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)

Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
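For orientation, a hedged sketch of the kind of state such a build() typically creates for actnorm: a per-channel logs and bias weight, plus a non-trainable flag used to trigger the one-time data-dependent initialization. The class name, variable names, and shapes are assumptions, not the library's actual code.

```python
import tensorflow as tf


class ActnormBuildSketch(tf.keras.layers.Layer):
    """Illustrative only: shows the state build() would plausibly create."""

    def build(self, input_shape: tf.TensorShape):
        n_channels = input_shape[-1]
        # One log-scale and one bias parameter per channel.
        self.logs = self.add_weight(
            name="logs", shape=(n_channels,), initializer="zeros", trainable=True)
        self.bias = self.add_weight(
            name="bias", shape=(n_channels,), initializer="zeros", trainable=True)
        # Non-trainable scalar flag (0.0 = data-dependent init not yet run).
        self.initialized = self.add_weight(
            name="initialized", shape=(), initializer="zeros", trainable=False)
        super().build(input_shape)


# Example: ActnormBuildSketch().build(tf.TensorShape([None, 8, 8, 4]))
```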
call(x: tensorflow.python.framework.ops.Tensor)

This is where the layer’s logic lives.
- Parameters
  x – Input tensor.
- Returns
  A tensor of the same shape as the input, i.e. the forward formula above applied elementwise.
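A short usage sketch. The import path follows the module name above; the 4-D, image-like input shape is an assumption about what the layer expects.

```python
import tensorflow as tf

from TFGENZOO.flows.utils.actnorm_activation import ActnormActivation

layer = ActnormActivation(scale=1.0, logscale_factor=3.0)
x = tf.random.normal([16, 8, 8, 4])  # [batch, height, width, channels]
z = layer(x)                         # first call runs the data-dependent initialization
print(z.shape)                       # elementwise transform, so shape matches x: (16, 8, 8, 4)
```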
-
get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
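A serialization round-trip sketch. Exactly which keys appear in the returned dictionary (presumably scale and logscale_factor alongside the base Layer fields) is an assumption.

```python
from TFGENZOO.flows.utils.actnorm_activation import ActnormActivation

layer = ActnormActivation(scale=1.0, logscale_factor=3.0)
config = layer.get_config()                       # plain, serializable dict
restored = ActnormActivation.from_config(config)  # same hyperparameters, untrained weights
```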