TFGENZOO.flows.utils package

Module contents

class TFGENZOO.flows.utils.Conv2D(width: int = None, width_scale: int = 1, kernel_size: Tuple[int, int] = (3, 3), stride: Tuple[int, int] = (1, 1), padding: str = 'SAME', do_actnorm: bool = True, do_weightnorm: bool = False, initializer: tensorflow.python.ops.init_ops_v2.Initializer = <tensorflow.python.ops.init_ops_v2.RandomNormal object>, bias_initializer: tensorflow.python.ops.init_ops_v2.Initializer = 'zeros')[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

Convolution layer for NHWC images

Sources:

Note

This layer applies:

  • data-dependent normalization (actnorm, OpenAI’s Glow)

  • weight normalization for stable training

This layer does not implement:

  • the function add_edge_padding

ref. https://github.com/openai/glow/blob/master/tfops.py#L203-L232
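The weight normalization option in the Note reparameterizes each kernel as w = g * v / ||v||, so direction and magnitude are learned separately. A minimal scalar sketch of that reparameterization using only the standard library (the names v and g are illustrative, not the layer's actual variables):

```python
import math

# Weight normalization (Salimans & Kingma, 2016), reduced to a flattened
# kernel: w = g * v / ||v||.  The direction comes from v, the magnitude
# from the learned scalar g.
v = [0.6, 0.8]   # raw (unnormalized) kernel values, illustrative
g = 2.0          # learned magnitude

norm = math.sqrt(sum(x * x for x in v))
w = [g * x / norm for x in v]   # kernel actually used by the convolution
```

By construction ||w|| equals g regardless of the scale of v, which is what stabilizes training.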

build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x: tensorflow.python.framework.ops.Tensor)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

class TFGENZOO.flows.utils.Conv2DZeros(width: int = None, width_scale: int = 1, kernel_size: Tuple[int, int] = (3, 3), stride: Tuple[int, int] = (1, 1), padding: str = 'SAME', logscale_factor: float = 3.0, initializer: tensorflow.python.ops.init_ops_v2.Initializer = 'zeros')[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

Convolution layer for NHWC images with zero initialization

Sources:
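Zero initialization makes the layer output exactly zero before training, so a flow step that uses it starts near the identity; the learned log-scale, multiplied by logscale_factor, then grows during training. A scalar sketch of the idea (hypothetical names, not the layer's real variables):

```python
import math

# Scalar reduction of a zero-initialized conv with a learned log-scale.
# At initialization the kernel, bias, and logs are all zero, so the
# layer outputs exactly zero for any input.
w, b, logs = 0.0, 0.0, 0.0
logscale_factor = 3.0

def conv_zeros(x):
    y = x * w + b                               # "convolution" with zero weights
    return y * math.exp(logs * logscale_factor)  # learned scale, 1.0 at init

print(conv_zeros(5.0))  # 0.0 at initialization
```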

build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x: tensorflow.python.framework.ops.Tensor)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

class TFGENZOO.flows.utils.Conv1DZeros(width: int = None, width_scale: int = 2, kernel_size: int = 3, stride: int = 1, padding: str = 'SAME', logscale_factor: float = 3.0, initializer: tensorflow.python.ops.init_ops_v2.Initializer = 'zeros')[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

Convolution layer for NTC text/audio with zero initialization

Sources:

Examples

>>> import tensorflow as tf
>>> from TFGENZOO.flows.utils.conv_zeros import Conv1DZeros
>>> c1z = Conv1DZeros(width_scale=2)
>>> x = tf.keras.layers.Input([None, 32]) # [B, T, C] where T is time-step and C is hidden-depth
>>> y = c1z(x) # [B, T, C * 2]
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x: tensorflow.python.framework.ops.Tensor)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

class TFGENZOO.flows.utils.ActnormActivation(scale: float = 1.0, logscale_factor=3.0, **kwargs)[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

Actnorm layer without an inverse function

This layer cannot synchronize the mean / variance across multiple GPUs

Sources:

scale

scaling

Type

float

logscale_factor

logscale_factor

Type

float

Note

  • initialize
    mean = mean(first_batch)
    var = variance(first_batch)
    logs = log(scale / sqrt(var)) / logscale_factor
    bias = -mean
  • forward formula (forward only)
    logs = logs * logscale_factor
    scale = exp(logs)
    z = (x + bias) * scale
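The initialization and forward formulas in the Note can be checked with a single-channel, standard-library sketch (variable names follow the Note; the batch values are arbitrary):

```python
import math
import statistics

# Data-dependent initialization from the first batch (one channel).
first_batch = [2.0, 4.0, 6.0, 8.0]
scale = 1.0
logscale_factor = 3.0

mean = statistics.fmean(first_batch)
var = statistics.pvariance(first_batch)                 # population variance
logs = math.log(scale / math.sqrt(var)) / logscale_factor
bias = -mean

def actnorm(x):
    # forward: z = (x + bias) * exp(logs * logscale_factor)
    s = math.exp(logs * logscale_factor)
    return (x + bias) * s
```

Applying it to the first batch yields approximately zero mean and unit variance, which is the point of the data-dependent initialization.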
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x: tensorflow.python.framework.ops.Tensor)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

data_dep_initialize(x: tensorflow.python.framework.ops.Tensor)[source]
get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

TFGENZOO.flows.utils.gaussian_likelihood(mean: tensorflow.python.framework.ops.Tensor, logsd: tensorflow.python.framework.ops.Tensor, x: tensorflow.python.framework.ops.Tensor)[source]

Calculate the log likelihood of a Gaussian distribution.

Parameters
  • mean (tf.Tensor) – mean [B, …]

  • logsd (tf.Tensor) – log standard deviation [B, …]

  • x (tf.Tensor) – tensor [B, …]

Returns

log likelihood [B, …]

Return type

ll (tf.Tensor)

Note

\begin{align} ll &= - \cfrac{1}{2} \left( k\log(2 \pi) + \log |Var| + (x - \mu)^T Var^{-1} (x - \mu) \right)\\ \text{where } & k = 1\ \text{(independent)},\quad Var = \exp(2\, logsd) \end{align}
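Element-wise (k = 1), the formula reduces to the univariate Gaussian log density, which can be checked against the standard library's NormalDist (the values below are arbitrary):

```python
import math
from statistics import NormalDist

def gaussian_ll(mean, logsd, x):
    # log N(x; mean, exp(logsd)^2), i.e. the k = 1 case of the formula above
    var = math.exp(2.0 * logsd)
    return -0.5 * (math.log(2.0 * math.pi) + math.log(var)
                   + (x - mean) ** 2 / var)

# agrees with the log of the normal density for sigma = exp(logsd)
ll = gaussian_ll(0.0, 0.0, 1.5)
ref = math.log(NormalDist(0.0, 1.0).pdf(1.5))
```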
TFGENZOO.flows.utils.gaussian_sample(mean: tensorflow.python.framework.ops.Tensor, logsd: tensorflow.python.framework.ops.Tensor, temparature: float = 1.0)[source]

Sample from the Gaussian defined by mean and logsd * temparature

Parameters
  • mean (tf.Tensor) – mean [B, …]

  • logsd (tf.Tensor) – log standard deviation [B, …]

  • temparature (float) – sampling temperature

Returns

sampled latent variable [B, …]

Return type

new_z(tf.Tensor)

Note

I cannot guarantee its correctness; please refer to TensorFlow Probability’s issue tracker.
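The sampling is presumably the usual reparameterization trick, z = mean + sigma * temperature * eps with eps ~ N(0, 1). One convention scales the standard deviation by the temperature (as in Glow); the docstring's "logsd * temparature" reading would scale the log-std instead. A scalar standard-library sketch under the first assumption (not the library's actual implementation):

```python
import math
import random

def gaussian_sample(mean, logsd, temperature=1.0, rng=random):
    # reparameterization trick: z = mean + exp(logsd) * temperature * eps,
    # eps ~ N(0, 1).  (One common convention; the library may differ.)
    eps = rng.gauss(0.0, 1.0)
    return mean + math.exp(logsd) * temperature * eps

# temperature = 0 collapses sampling to the mean
z = gaussian_sample(2.5, 0.3, temperature=0.0)
```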

TFGENZOO.flows.utils.bits_x(log_likelihood: tensorflow.python.framework.ops.Tensor, log_det_jacobian: tensorflow.python.framework.ops.Tensor, pixels: int, n_bits: int = 8)[source]

Bits per dimension (bits/dim)

Sources:

Parameters
  • log_likelihood (tf.Tensor) – shape is [batch_size,]

  • log_det_jacobian (tf.Tensor) – shape is [batch_size,]

  • pixels (int) – e.g. HWC image => H * W * C

  • n_bits (int) – e.g. a [0, 255] image => 8 = log2(256)

Returns

shape is [batch_size,]

Return type

bits_x

Note

formula

\[bits\_x = - \cfrac{(log\_likelihood + log\_det\_jacobian)} {pixels \log{2}} + n\_bits\]
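A direct standard-library transcription of the formula above (the inputs are arbitrary illustrative values, not real model outputs):

```python
import math

def bits_x(log_likelihood, log_det_jacobian, pixels, n_bits=8):
    # bits per dimension, following the formula above
    return (-(log_likelihood + log_det_jacobian)
            / (pixels * math.log(2.0)) + n_bits)

# e.g. a 32x32x3 image
bpd = bits_x(log_likelihood=-6500.0, log_det_jacobian=120.0,
             pixels=32 * 32 * 3)
```

A quick sanity check: with zero log-likelihood and zero log-det-Jacobian the result is exactly n_bits.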
TFGENZOO.flows.utils.split_feature(x: tensorflow.python.framework.ops.Tensor, type: str = 'split')[source]

Split a feature tensor along the channel axis.

type (str): one of ['split', 'cross']

TODO: implement Haar downsampling
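The two modes likely follow the usual Glow/RealNVP channel-split conventions: 'split' takes contiguous halves of the channels, while 'cross' takes even- and odd-indexed channels. A list-based sketch of that convention (an assumption about this function, not its verified behavior):

```python
def split_feature(channels, type="split"):
    # channels: the per-position channel values as a list, for illustration
    if type == "split":
        # contiguous halves: [c0 .. c_{n/2-1}], [c_{n/2} .. c_{n-1}]
        half = len(channels) // 2
        return channels[:half], channels[half:]
    elif type == "cross":
        # interleaved: even-indexed channels, odd-indexed channels
        return channels[0::2], channels[1::2]
    raise ValueError(type)

a, b = split_feature([1, 2, 3, 4], type="cross")  # ([1, 3], [2, 4])
```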