TFGENZOO.flows.cond_affine_coupling module
class TFGENZOO.flows.cond_affine_coupling.ConditionalAffineCoupling(mask_type: TFGENZOO.flows.affine_coupling.AffineCouplingMask = <AffineCouplingMask.ChannelWise: 1>, scale_shift_net: tensorflow.python.keras.engine.base_layer.Layer = None, scale_shift_net_template: Callable[[tensorflow.python.keras.engine.input_layer.Input], tensorflow.python.keras.engine.training.Model] = None, scale_type='safe_exp', **kwargs)

Bases: TFGENZOO.flows.flowbase.FlowComponent

Conditional Affine Coupling Layer
Note

- forward formula:

    [x1, x2] = split(x)
    log_scale, shift = NN([x1, c])
    scale = sigmoid(log_scale + 2.0)
    z1 = x1
    z2 = (x2 + shift) * scale
    z = concat([z1, z2])
    LogDetJacobian = sum(log(scale))

- inverse formula:

    [z1, z2] = split(z)
    log_scale, shift = NN([z1, c])
    scale = sigmoid(log_scale + 2.0)
    x1 = z1
    x2 = z2 / scale - shift
    x = concat([x1, x2])
    InverseLogDetJacobian = - sum(log(scale))

- implementation notes:

    In Glow's paper the scale is computed as exp(log_scale), but in this implementation it is computed as sigmoid(log_scale + 2.0). Here c is the conditional input, as in WaveGlow or cINN. A sketch of the forward pass is given after this note.

- TODO notes:

    cINN uses double coupling, whereas this layer implements single coupling.
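The following is a minimal, self-contained sketch of the forward formula above, not the library's actual implementation; the scale_shift_net argument and its (shift, log_scale) output ordering are illustrative assumptions.

    import tensorflow as tf

    def cond_affine_coupling_forward(x, cond, scale_shift_net):
        # [x1, x2] = split(x) along the channel axis (ChannelWise mask)
        x1, x2 = tf.split(x, 2, axis=-1)
        # NN([x1, c]); assumed to return concat([shift, log_scale]) on the last axis
        h = scale_shift_net(tf.concat([x1, cond], axis=-1))
        shift, log_scale = tf.split(h, 2, axis=-1)
        # scale = sigmoid(log_scale + 2.0), as in the implementation note
        scale = tf.nn.sigmoid(log_scale + 2.0)
        z1 = x1
        z2 = (x2 + shift) * scale
        z = tf.concat([z1, z2], axis=-1)
        # LogDetJacobian = sum(log(scale)) over all non-batch axes -> shape [B]
        ldj = tf.reduce_sum(tf.math.log(scale), axis=list(range(1, x.shape.rank)))
        return z, ldj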
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)

Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
forward(x: tensorflow.python.framework.ops.Tensor, cond: tensorflow.python.framework.ops.Tensor, **kwargs)
get_config()

Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.cond_affine_coupling.ConditionalAffineCoupling2DWithMask(mask_type: TFGENZOO.flows.affine_coupling.AffineCouplingMask = <AffineCouplingMask.ChannelWise: 1>, scale_shift_net: tensorflow.python.keras.engine.base_layer.Layer = None, scale_shift_net_template: Callable[[tensorflow.python.keras.engine.input_layer.Input], tensorflow.python.keras.engine.training.Model] = None, scale_type='safe_exp', **kwargs)

Bases: TFGENZOO.flows.cond_affine_coupling.ConditionalAffineCouplingWithMask
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)

Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
class TFGENZOO.flows.cond_affine_coupling.ConditionalAffineCouplingWithMask(mask_type: TFGENZOO.flows.affine_coupling.AffineCouplingMask = <AffineCouplingMask.ChannelWise: 1>, scale_shift_net: tensorflow.python.keras.engine.base_layer.Layer = None, scale_shift_net_template: Callable[[tensorflow.python.keras.engine.input_layer.Input], tensorflow.python.keras.engine.training.Model] = None, scale_type='safe_exp', **kwargs)

Bases: TFGENZOO.flows.cond_affine_coupling.ConditionalAffineCoupling
Conditional Affine Coupling Layer with mask
Note

- forward formula:

    [x1, x2] = split(x)
    log_scale, shift = NN([x1, c])
    scale = exp(log_scale)
    z1 = x1
    z2 = (x2 + shift) * scale
    z = concat([z1, z2])
    LogDetJacobian = sum(log(scale))

- inverse formula:

    [z1, z2] = split(z)
    log_scale, shift = NN([z1, c])
    scale = exp(log_scale)
    x1 = z1
    x2 = z2 / scale - shift
    x = concat([x1, x2])
    InverseLogDetJacobian = - sum(log(scale))

- implementation notes:

    In Glow's paper the scale is computed as exp(log_scale), but in this implementation it is computed as sigmoid(log_scale + 2.0). Here c is the conditional input, as in WaveGlow or cINN.

- TODO notes:

    cINN uses double coupling, whereas this layer implements single coupling.
    scale > 0 because exp(x) > 0.

- mask notes:

    The mask shape is [B, T, M], where M may be 1. Reference: glow-tts. A sketch of the masked forward computation is given after this note.
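Below is a minimal sketch of how such a boolean mask might enter the forward formula, in the style of glow-tts masking; the masking convention and the scale_shift_net interface are illustrative assumptions, not the library's actual code.

    import tensorflow as tf

    def masked_cond_affine_forward(x, cond, mask, scale_shift_net):
        # mask: [B, T, 1] boolean; False marks padded frames
        mask = tf.cast(mask, x.dtype)
        # [x1, x2] = split(x) along channels; the mask broadcasts over channels
        x1, x2 = tf.split(x, 2, axis=-1)
        h = scale_shift_net(tf.concat([x1, cond], axis=-1) * mask)
        shift, log_scale = tf.split(h, 2, axis=-1)
        # zero shift/log_scale at padded frames so they pass through unchanged
        shift = shift * mask
        log_scale = log_scale * mask
        scale = tf.exp(log_scale)                     # scale = exp(log_scale)
        z = tf.concat([x1, (x2 + shift) * scale], axis=-1)
        # padded frames contribute log(1) = 0 to the log-det-Jacobian
        ldj = tf.reduce_sum(log_scale, axis=[1, 2])   # shape [B]
        return z, ldj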
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)

Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
forward(x: tensorflow.python.framework.ops.Tensor, cond: tensorflow.python.framework.ops.Tensor, mask: tensorflow.python.framework.ops.Tensor = None, **kwargs)

- Parameters
x (tf.Tensor) – base input tensor [B, T, C]
cond (tf.Tensor) – conditional input tensor [B, T, C’]
mask (tf.Tensor) – mask input tensor [B, T, M] where M may be 1
- Returns
    z (tf.Tensor) – latent variable tensor [B, T, C]
    ldj (tf.Tensor) – log det jacobian [B]
Note
- mask example:

    [[True, True,  True,  False],
     [True, False, False, False],
     [True, True,  True,  True],
     [True, True,  True,  True]]
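A mask like the example above is typically built from per-sequence lengths; the sketch below reproduces it with tf.sequence_mask (the lengths and the commented call are illustrative assumptions).

    import tensorflow as tf

    # hypothetical valid lengths for a batch of 4 sequences padded to T = 4
    lengths = tf.constant([3, 1, 4, 4])

    # boolean frame mask [B, T], matching the example above
    mask = tf.sequence_mask(lengths, maxlen=4)

    # expand to [B, T, M] with M = 1 so it broadcasts over channels
    mask = mask[..., tf.newaxis]

    # z, ldj = layer.forward(x, cond, mask=mask)   # assumed call pattern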