TFGENZOO.flows package¶
Subpackages¶
Submodules¶
- TFGENZOO.flows.actnorm module
- TFGENZOO.flows.actnorm_test module
- TFGENZOO.flows.affine_coupling module
- TFGENZOO.flows.affine_coupling_test module
- TFGENZOO.flows.cond_affine_coupling module
- TFGENZOO.flows.cond_affine_coupling_test module
- TFGENZOO.flows.factor_out module
- TFGENZOO.flows.flatten module
- TFGENZOO.flows.flowbase module
- TFGENZOO.flows.flowmodel module
- TFGENZOO.flows.inv1x1conv module
- TFGENZOO.flows.inv1x1conv_test module
- TFGENZOO.flows.quantize module
- TFGENZOO.flows.quantize_test module
- TFGENZOO.flows.squeeze module
- TFGENZOO.flows.squeeze_test module
Module contents¶
class TFGENZOO.flows.FactorOutBase(with_zaux: bool = False, **kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowBase
Factor Out Layer in Flow-based Model
Examples
>>> fo = FactorOutBase(with_zaux=False)
>>> z, zaux = fo(x, zaux=None, inverse=False)
>>> x = fo(z, zaux=zaux, inverse=True)

>>> fo = FactorOutBase(with_zaux=True)
>>> z, zaux = fo(x, zaux=zaux, inverse=False)
>>> x, zaux = fo(z, zaux=zaux, inverse=True)
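A minimal sketch of a concrete subclass is shown below. It assumes FactorOutBase dispatches call() to forward/inverse based on the inverse flag and stores the with_zaux constructor flag; the channel split mirrors Glow's factor-out, and the class name HalfFactorOut is illustrative only, not part of TFGENZOO.

import tensorflow as tf
from TFGENZOO.flows import FactorOutBase

# Hypothetical subclass (not part of TFGENZOO): keep half of the channels and
# push the other half into zaux, as in Glow's multi-scale architecture.
# Assumes static channel dimensions and a matching `inverse` hook in the base class.
class HalfFactorOut(FactorOutBase):
    def forward(self, x: tf.Tensor, zaux: tf.Tensor, **kwargs):
        # keep the first half, factor out the second half
        z, factored = tf.split(x, 2, axis=-1)
        if self.with_zaux and zaux is not None:
            factored = tf.concat([zaux, factored], axis=-1)
        return z, factored

    def inverse(self, z: tf.Tensor, zaux: tf.Tensor, **kwargs):
        # take the factored-out half back from zaux and re-concatenate it
        c = z.shape[-1]
        zaux, factored = zaux[..., :-c], zaux[..., -c:]
        x = tf.concat([z, factored], axis=-1)
        if self.with_zaux:
            return x, zaux
        return x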
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
call(x: tensorflow.python.framework.ops.Tensor, zaux: tensorflow.python.framework.ops.Tensor = None, inverse=False, **kwargs)[source]¶
This is where the layer’s logic lives.
- Parameters
inputs – Input tensor, or list/tuple of input tensors.
**kwargs – Additional keyword arguments.
- Returns
A tensor or list/tuple of tensors.
abstract forward(x: tensorflow.python.framework.ops.Tensor, zaux: tensorflow.python.framework.ops.Tensor, **kwargs)[source]¶
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.FlowComponent(**kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowBase
Flow-based model’s abstract class
Note
This layer is inherited by invertible layers that compute a log det Jacobian.
assert_log_det_jacobian(log_det_jacobian: tensorflow.python.framework.ops.Tensor)[source]¶
Assert log_det_jacobian’s shape.
Note
tf-2.0’s bug:
tf.debugging.assert_shapes([(tf.constant(1.0), (None,))])            # => None (true)
tf.debugging.assert_shapes([(tf.constant([1.0, 1.0]), (None,))])     # => None (true)
tf.debugging.assert_shapes([(tf.constant([[1.0], [1.0]]), (None,))]) # => Error
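Because of that quirk, a standalone shape check for a per-example log-determinant can assert the rank explicitly in addition to assert_shapes. A sketch (illustrative only, not the TFGENZOO implementation):

import tensorflow as tf

def check_log_det_jacobian(log_det_jacobian: tf.Tensor):
    # A per-example log-determinant should be a rank-1 tensor of shape [batch].
    # tf.debugging.assert_shapes alone also accepts scalars (the tf-2.0 quirk above),
    # so the rank is asserted explicitly as well.
    tf.debugging.assert_rank(log_det_jacobian, 1)
    tf.debugging.assert_shapes([(log_det_jacobian, ("B",))])

check_log_det_jacobian(tf.zeros([16]))  # passes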
class TFGENZOO.flows.FlowModule(components: List[TFGENZOO.flows.flowbase.FlowComponent], **kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowBase
Sequential layer that composes FlowBase layers
Examples
>>> layers = [FlowBase() for _ in range(10)]
>>> module = FlowModule(layers)
>>> z = module(x, inverse=False)
>>> x_hat = module(z, inverse=True)
>>> assert tf.reduce_max((x - x_hat) ** 2) < 1e-3
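For reference, the composition rule behind such a sequential module, written as a sketch under the assumption that each component returns (z, log_det_jacobian) forward and (x, inverse_log_det_jacobian) inverse. This is not the TFGENZOO code, only the composition rule it implements:

import tensorflow as tf

def sequential_forward(components, x: tf.Tensor):
    log_det_jacobian = tf.zeros(tf.shape(x)[0:1])
    z = x
    for component in components:
        z, ldj = component(z, inverse=False)
        log_det_jacobian += ldj            # log-dets of composed bijections add up
    return z, log_det_jacobian

def sequential_inverse(components, z: tf.Tensor):
    inverse_log_det_jacobian = tf.zeros(tf.shape(z)[0:1])
    x = z
    for component in reversed(components):  # invert in reverse order
        x, ildj = component(x, inverse=True)
        inverse_log_det_jacobian += ildj
    return x, inverse_log_det_jacobian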
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.Actnorm(scale: float = 1.0, logscale_factor: float = 3.0, **kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowComponent
Actnorm Layer
Sources:
Note
- initialize
  mean = mean(first_batch)
  var = variance(first_batch)
  logs = log(scale / sqrt(var)) / logscale_factor
  bias = -mean
- forward formula
  logs = logs * logscale_factor
  scale = exp(logs)
  z = (x + bias) * scale
  log_det_jacobian = sum(logs) * H * W
- inverse formula
  logs = logs * logscale_factor
  inv_scale = exp(-logs)
  z = x * inv_scale - bias
  inverse_log_det_jacobian = sum(-logs) * H * W
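The formulas above, written out in plain TensorFlow as a sketch. This is illustrative only; the actual layer initializes logs and bias from the first batch and stores them as weights, and broadcasts the log-determinant to a per-example vector.

import tensorflow as tf

# logs and bias are assumed to have shape [1, 1, 1, C].
def actnorm_forward(x, logs, bias, logscale_factor=3.0):
    logs = logs * logscale_factor
    scale = tf.exp(logs)
    z = (x + bias) * scale
    h, w = x.shape[1], x.shape[2]
    # sum over channels, then multiply by H * W
    log_det_jacobian = tf.reduce_sum(logs) * h * w
    return z, log_det_jacobian

def actnorm_inverse(z, logs, bias, logscale_factor=3.0):
    logs = logs * logscale_factor
    x = z * tf.exp(-logs) - bias
    h, w = z.shape[1], z.shape[2]
    inverse_log_det_jacobian = tf.reduce_sum(-logs) * h * w
    return x, inverse_log_det_jacobian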
calc_ldj¶
bool – flag for whether to calculate the log det Jacobian
scale¶
float – scaling applied to the initialization batch’s variance
logscale_factor¶
float – factor acting as a barrier that keeps the log value away from -Inf
Examples
>>> import tensorflow as tf
>>> from TFGENZOO.flows import Actnorm
>>> ac = Actnorm()
>>> ac.build([None, 16, 16, 4])
>>> ac.get_config()
{'name': 'actnorm_1', ... }
>>> inputs = tf.keras.Input([16, 16, 4])
>>> ac(inputs)
(<tf.Tensor 'actnorm_1_2/Identity:0' shape=(None, 16, 16, 4) dtype=float32>,
 <tf.Tensor 'actnorm_1_2/Identity_1:0' shape=(None,) dtype=float32>)
>>> tf.keras.Model(inputs, ac(inputs)).summary()
Model: "model_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         [(None, 16, 16, 4)]       0
_________________________________________________________________
actnorm_1 (Actnorm)          ((None, 16, 16, 4), (None 9
=================================================================
Total params: 9
Trainable params: 0
Non-trainable params: 9
_________________________________________________________________
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.AffineCoupling(mask_type: TFGENZOO.flows.affine_coupling.AffineCouplingMask = <AffineCouplingMask.ChannelWise: 1>, scale_shift_net: tensorflow.python.keras.engine.base_layer.Layer = None, scale_shift_net_template: Callable[[tensorflow.python.keras.engine.input_layer.Input], tensorflow.python.keras.engine.training.Model] = None, scale_type='safe_exp', **kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowComponent
Affine Coupling Layer
Note
- forward formula
  [x1, x2] = split(x)
  log_scale, shift = NN(x1)
  scale = sigmoid(log_scale + 2.0)
  z1 = x1
  z2 = (x2 + shift) * scale
  z = concat([z1, z2])
  LogDetJacobian = sum(log(scale))
- inverse formula
  [z1, z2] = split(z)
  log_scale, shift = NN(z1)
  scale = sigmoid(log_scale + 2.0)
  x1 = z1
  x2 = z2 / scale - shift
  x = concat([x1, x2])
  InverseLogDetJacobian = -sum(log(scale))
- implementation notes
  In Glow’s paper, scale is calculated as exp(log_scale), but in this implementation it is computed as sigmoid(log_scale + 2.0). A sketch of these formulas in code follows below.
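A sketch of the channel-wise coupling formulas above in plain TensorFlow. Illustrative only, not the TFGENZOO layer; nn stands for the scale_shift_net and is assumed to return a (log_scale, shift) pair.

import tensorflow as tf

def affine_coupling_forward(x, nn):
    x1, x2 = tf.split(x, 2, axis=-1)
    log_scale, shift = nn(x1)
    scale = tf.sigmoid(log_scale + 2.0)   # sigmoid(log_scale + 2.0), not exp(log_scale)
    z2 = (x2 + shift) * scale
    z = tf.concat([x1, z2], axis=-1)
    log_det_jacobian = tf.reduce_sum(tf.math.log(scale), axis=[1, 2, 3])
    return z, log_det_jacobian

def affine_coupling_inverse(z, nn):
    z1, z2 = tf.split(z, 2, axis=-1)
    log_scale, shift = nn(z1)
    scale = tf.sigmoid(log_scale + 2.0)
    x2 = z2 / scale - shift
    x = tf.concat([z1, x2], axis=-1)
    inverse_log_det_jacobian = -tf.reduce_sum(tf.math.log(scale), axis=[1, 2, 3])
    return x, inverse_log_det_jacobian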
Examples
>>> import tensorflow as tf
>>> from TFGENZOO.flows.affine_coupling import AffineCoupling
>>> from TFGENZOO.layers.resnet import ShallowResNet
>>> af = AffineCoupling(scale_shift_net_template=ShallowResNet)
>>> af.build([None, 16, 16, 4])
>>> af.get_config()
{'name': 'affine_coupling_1', ...}
>>> inputs = tf.keras.Input([16, 16, 4])
>>> af(inputs)
(<tf.Tensor 'affine_coupling_3_2/Identity:0' shape=(None, 16, 16, 4) dtype=float32>,
 <tf.Tensor 'affine_coupling_3_2/Identity_1:0' shape=(None,) dtype=float32>)
>>> tf.keras.Model(inputs, af(inputs)).summary()
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         [(None, 16, 16, 4)]       0
_________________________________________________________________
affine_coupling (AffineCoupl ((None, 16, 16, 4), (None 2389003
=================================================================
Total params: 2,389,003
Trainable params: 0
Non-trainable params: 2,389,003
_________________________________________________________________
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.LogScale(log_scale_factor: float = 3.0, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.base_layer.Layer
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
call(x: tensorflow.python.framework.ops.Tensor)[source]¶
This is where the layer’s logic lives.
- Parameters
inputs – Input tensor, or list/tuple of input tensors.
**kwargs – Additional keyword arguments.
- Returns
A tensor or list/tuple of tensors.
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.Inv1x1Conv(log_det_type: str = 'slogdet', **kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowComponent
Invertible 1x1 Convolution Layer
- Sources:
https://arxiv.org/pdf/1807.03039.pdf https://github.com/openai/glow/blob/master/model.py#L457-L472
Note
- forward formula
- \[\begin{split}\forall i, j: z_{i, j} &= Wx_{i, j} \\ LogDetJacobian &= hw \log|det(W)|\\ , where &\\ W &\in \mathbb{R}^{c \times c}\\ x &\in \mathbb{R}^{b \times h\times w \times c}\ \ \ ({\rm batch, height, width, channel})\end{split}\]
- inverse formula
- \[\begin{split}\forall i, j: x_{i, j} &= W^{-1} z_{i, j}\\ InverseLogDetJacobian &= - h w \log|det(W)|\\ , where &\\ W &\in \mathbb{R}^{c\times c}\\ x &\in \mathbb{R}^{b \times h\times w \times c}\ \ \ ({\rm batch, height, width, channel})\end{split}\]
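A sketch of these formulas with the 'slogdet' log-det type, written in plain TensorFlow. Illustrative only, not the TFGENZOO layer; w_mat has shape [C, C] and x has shape [B, H, W, C].

import tensorflow as tf

def inv1x1_forward(x: tf.Tensor, w_mat: tf.Tensor):
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    kernel = tf.reshape(w_mat, [1, 1, c, c])          # 1x1 convolution kernel
    z = tf.nn.conv2d(x, kernel, strides=1, padding="SAME")
    _, log_abs_det = tf.linalg.slogdet(w_mat)
    log_det_jacobian = h * w * log_abs_det            # h * w * log|det(W)|
    return z, log_det_jacobian

def inv1x1_inverse(z: tf.Tensor, w_mat: tf.Tensor):
    h, w, c = z.shape[1], z.shape[2], z.shape[3]
    kernel = tf.reshape(tf.linalg.inv(w_mat), [1, 1, c, c])
    x = tf.nn.conv2d(z, kernel, strides=1, padding="SAME")
    _, log_abs_det = tf.linalg.slogdet(w_mat)
    inverse_log_det_jacobian = -h * w * log_abs_det
    return x, inverse_log_det_jacobian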
Examples
>>> import tensorflow as tf
>>> from TFGENZOO.flows import Inv1x1Conv
>>> ic = Inv1x1Conv()
>>> ic.build([None, 16, 16, 4])
>>> ic.get_config()
{'name': 'inv1x1_conv_1', 'trainable': {}, 'dtype': 'float32'}
>>> inputs = tf.keras.Input([16, 16, 4])
>>> tf.keras.Model(inputs, ic(inputs)).summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         [(None, 16, 16, 4)]       0
_________________________________________________________________
inv1x1_conv_1 (Inv1x1Conv)   ((None, 16, 16, 4), (None 17
=================================================================
Total params: 17
Trainable params: 0
Non-trainable params: 17
_________________________________________________________________
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
TFGENZOO.flows.regular_matrix_init(shape: Tuple[int, int], dtype=None)[source]¶
Initialize with an orthogonal matrix.
- Parameters
shape – generated matrix’s shape [C, C]
dtype –
- Returns
w_init, orthogonal matrix [C, C]
- Return type
np.array
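A common way to build such an orthogonal initializer is a QR decomposition of a random Gaussian matrix; the following sketch is illustrative, and the exact TFGENZOO implementation may differ.

import numpy as np

def orthogonal_init(shape, dtype=None):
    # QR decomposition of a random Gaussian matrix yields an orthogonal Q.
    c = shape[0]
    random_matrix = np.random.randn(c, c)
    q, _ = np.linalg.qr(random_matrix)   # q is orthogonal: q @ q.T == I
    return q.astype(dtype or np.float32)

w_init = orthogonal_init((4, 4))
assert np.allclose(w_init @ w_init.T, np.eye(4), atol=1e-5)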
class TFGENZOO.flows.LogitifyImage(corruption_level=1.0, alpha=0.05)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowBase
Apply Tapani Raiko’s dequantization and express image in terms of logits
Sources:
https://github.com/taesungp/real-nvp/blob/master/real_nvp/model.py https://github.com/taesungp/real-nvp/blob/master/real_nvp/model.py#L42-L54 https://github.com/tensorflow/models/blob/fe4e6b653141a197779d752b422419493e5d9128/research/real_nvp/real_nvp_multiscale_dataset.py#L1073-L1077 https://github.com/masa-su/pixyz/blob/master/pixyz/flows/operations.py#L253-L254 https://github.com/fmu2/realNVP/blob/8d36691df215af3678440ccb7c01a13d2b441a4a/data_utils.py#L112-L119
- Parameters
corruption_level (float) – strength (scale) of the added random noise.
alpha (float) – parameter for transforming the closed interval [0, 1] into the open interval (0, 1)
Note
There are many implementations of this dequantization; we use this formula because it is the one most implementations use.
- forward preprocess (add noise)
- \[\begin{split}z &\leftarrow 255.0 x \ \because [0, 1] \rightarrow [0, 255] \\ z &\leftarrow z + \text{corruption_level} \times \epsilon \ where\ \epsilon \sim N(0, 1)\\ z &\leftarrow z / (\text{corruption_level} + 255.0)\\ z &\leftarrow z (1 - \alpha) + 0.5 \alpha \ \because \ [0, 1] \rightarrow (0, 1) \\ z &\leftarrow \log(z) - \log(1 -z)\end{split}\]
- forward formula
- \[\begin{split}z &= logit(x (1 - \alpha) + 0.5 \alpha)\\ &= \log(x) - \log(1 - x)\\ LogDetJacobian &= sum(softplus(z) + softplus(-z) - softplus(\log(\cfrac{\alpha}{1 - \alpha})))\end{split}\]
- inverse formula
- \[\begin{split}x &= logistic(z)\\ &= 1 / (1 + exp( -z )) \\ x &= (x - 0.5 \alpha) / (1.0 - \alpha)\\ InverseLogDetJacobian &= sum(2 \log(logistic(z)) - z + softplus(\log(\cfrac{\alpha}{1 - \alpha})))\end{split}\]
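A sketch of the dequantization and logit transform above in plain TensorFlow. Illustrative only, not the TFGENZOO layer; x is an image in [0, 1], and alpha and corruption_level follow the constructor arguments.

import tensorflow as tf

def logitify_forward(x, corruption_level=1.0, alpha=0.05):
    z = x * 255.0                                          # [0, 1] -> [0, 255]
    z = z + corruption_level * tf.random.normal(tf.shape(z))
    z = z / (255.0 + corruption_level)
    z = z * (1.0 - alpha) + 0.5 * alpha                    # [0, 1] -> (0, 1)
    z = tf.math.log(z) - tf.math.log(1.0 - z)              # logit
    log_det_jacobian = tf.reduce_sum(
        tf.math.softplus(z) + tf.math.softplus(-z)
        - tf.math.softplus(tf.math.log(alpha / (1.0 - alpha))),
        axis=[1, 2, 3])
    return z, log_det_jacobian

def logitify_inverse(z, alpha=0.05):
    x = tf.sigmoid(z)                                      # logistic(z)
    x = (x - 0.5 * alpha) / (1.0 - alpha)
    inverse_log_det_jacobian = tf.reduce_sum(
        2.0 * tf.math.log(tf.sigmoid(z)) - z
        + tf.math.softplus(tf.math.log(alpha / (1.0 - alpha))),
        axis=[1, 2, 3])
    return x, inverse_log_det_jacobian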
Examples
>>> import tensorflow as tf
>>> from TFGENZOO.flows import LogitifyImage
>>> li = LogitifyImage()
>>> li.build([None, 32, 32, 1])
>>> li.get_config()
{'name': 'logitify_image_1', ...}
>>> inputs = tf.keras.Input([32, 32, 1])
>>> li(inputs)
(<tf.Tensor 'logitify_image/Identity:0' shape=(None, 32, 32, 1) dtype=float32>,
 <tf.Tensor 'logitify_image/Identity_1:0' shape=(None,) dtype=float32>)
>>> tf.keras.Model(inputs, li(inputs)).summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 32, 32, 1)]       0
_________________________________________________________________
logitify_image (LogitifyImag ((None, 32, 32, 1), (None 1
=================================================================
Total params: 1
Trainable params: 0
Non-trainable params: 1
_________________________________________________________________
build(input_shape: tensorflow.python.framework.tensor_shape.TensorShape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.Flatten(**kwargs)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowComponent
Flatten Layer
Sources:
Examples
>>> import tensorflow as tf
>>> from TFGENZOO.flows import Flatten
>>> fl = Flatten()
>>> fl.build([None, 16, 16, 2])
>>> inputs = tf.keras.Input([16, 16, 2])
>>> fl(inputs)
(<tf.Tensor 'flatten_2_2/Identity:0' shape=(None, 512) dtype=float32>,
 <tf.Tensor 'flatten_2_2/Identity_1:0' shape=(None,) dtype=float32>)
>>> tf.keras.Model(inputs, fl(inputs)).summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 16, 16, 2)]       0
_________________________________________________________________
flatten_2 (Flatten)          ((None, 512), (None,))    1
=================================================================
Total params: 1
Trainable params: 0
Non-trainable params: 1
_________________________________________________________________
>>> z, ldj = fl(tf.random.normal([1024, 16, 16, 2]))
>>> x, ildj = fl(z, inverse=True)
>>> x.shape
TensorShape([1024, 16, 16, 2])
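Flattening is a volume-preserving reshape, so its log-det Jacobian is zero. A sketch of the underlying bijection (illustrative only, not the TFGENZOO code):

import tensorflow as tf

def flatten_forward(x: tf.Tensor):
    # reshape [B, H, W, C] -> [B, H*W*C]; reshaping does not change volume
    z = tf.reshape(x, [tf.shape(x)[0], -1])
    log_det_jacobian = tf.zeros(tf.shape(x)[0:1], dtype=x.dtype)
    return z, log_det_jacobian

def flatten_inverse(z: tf.Tensor, original_shape):
    # original_shape: e.g. [16, 16, 2]
    x = tf.reshape(z, [-1] + list(original_shape))
    inverse_log_det_jacobian = tf.zeros(tf.shape(z)[0:1], dtype=z.dtype)
    return x, inverse_log_det_jacobian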
build(input_shape)[source]¶
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Parameters
input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
class TFGENZOO.flows.Squeeze(with_zaux=False)[source]¶
Bases: TFGENZOO.flows.flowbase.FlowBase
Squeeze Layer
Sources:
Note
- forward formula
- z = reshape(x, [B, H // 2, W // 2, C * 4])
- inverse formula
- x = reshape(z, [B, H, W, C])
checkerboard spacing
e.g.
[[[[ 1], [ 2], [ 5], [ 6]],
  [[ 3], [ 4], [ 7], [ 8]],
  [[ 9], [10], [13], [14]],
  [[11], [12], [15], [16]]]]

to

[[[ 1,  5], [ 9, 13]]]
[[[ 2,  6], [10, 14]]]
[[[ 3,  7], [11, 15]]]
[[[ 4,  8], [12, 16]]]
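The checkerboard squeeze can be expressed as reshape -> transpose -> reshape. The sketch below reproduces the example above; it is illustrative, and TFGENZOO's exact implementation may differ.

import tensorflow as tf

def squeeze_forward(x: tf.Tensor):
    # [B, H, W, C] -> [B, H // 2, W // 2, C * 4]
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    z = tf.reshape(x, [-1, h // 2, 2, w // 2, 2, c])
    z = tf.transpose(z, [0, 1, 3, 5, 2, 4])     # gather each 2x2 block into channels
    return tf.reshape(z, [-1, h // 2, w // 2, c * 4])

def squeeze_inverse(z: tf.Tensor):
    # [B, H, W, C] -> [B, H * 2, W * 2, C // 4]
    h, w, c = z.shape[1], z.shape[2], z.shape[3]
    x = tf.reshape(z, [-1, h, w, c // 4, 2, 2])
    x = tf.transpose(x, [0, 1, 4, 2, 5, 3])     # scatter channels back into 2x2 blocks
    return tf.reshape(x, [-1, h * 2, w * 2, c // 4])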
forward(x: tensorflow.python.framework.ops.Tensor, zaux: tensorflow.python.framework.ops.Tensor = None, **kwargs)[source]¶
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.