echoflow.core package
class echoflow.core.BatchNorm(input_dims: int, momentum: float = 0.0, eps: float = 1e-05)

Bases: echoflow.core.base.BaseFlow

Batch normalization from RealNVP.

Parameters
- input_dims – The number of input dimensions.
- momentum – The momentum used to compute the running mean/var.
- eps – A small constant added to the variance for numerical stability.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
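
A minimal usage sketch, assuming the constructor and forward contract documented above (forward returns the transformed tensor together with the log-determinant):

```python
import torch
from echoflow.core import BatchNorm

flow = BatchNorm(input_dims=4)
x = torch.randn(32, 4)

y, logdet = flow(x)                        # direct transform using batch statistics
x_rec, inv_logdet = flow(y, inverse=True)  # inverse transform
```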
 
class echoflow.core.Coupling(input_dims: int, hidden_dims: int, input_mask: torch.Tensor, context_dims: int = 0)

Bases: echoflow.core.base.BaseFlow

Coupling layer from RealNVP. The coupling layer partitions the input x into two parts, x_1 and x_2, and applies an invertible transform

\[\begin{split}
y_1 &= x_1 \\
y_2 &= x_2 \odot \exp(s(x_1)) + t(x_1)
\end{split}\]

which modifies only one of the partitions.

Parameters
- input_dims – The number of input dimensions.
- hidden_dims – The hidden size to use for the scale/translate nets.
- input_mask – A binary mask for the input.
- context_dims – The number of context dimensions. If specified, then the output is conditioned on context.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
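
A sketch of how a coupling layer might be instantiated, assuming input_mask marks the partition that passes through unchanged; the exact mask convention is not stated above, so check the source:

```python
import torch
from echoflow.core import Coupling

dims = 6
mask = torch.tensor([1., 0., 1., 0., 1., 0.])  # alternating partition of the features
flow = Coupling(input_dims=dims, hidden_dims=64, input_mask=mask)

x = torch.randn(16, dims)
y, logdet = flow(x)               # y_1 copied, y_2 scaled and translated
x_rec, _ = flow(y, inverse=True)  # recovers x up to numerical error
```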
 
class echoflow.core.MADE(input_dims, hidden_dims, context_dims=0)

Bases: echoflow.core.base.BaseFlow

Autoregressive flow layer based on MADE (Masked Autoencoder for Distribution Estimation).

Parameters
- input_dims – The number of input dimensions.
- hidden_dims – The hidden size of the masked layers.
- context_dims – The number of context dimensions. If specified, then the output is conditioned on context.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
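
A minimal sketch based on the documented signature; context_dims is left at its default, so no contexts tensor is passed:

```python
import torch
from echoflow.core import MADE

flow = MADE(input_dims=8, hidden_dims=64)
x = torch.randn(32, 8)

z, logdet = flow(x)               # autoregressive forward pass
x_rec, _ = flow(z, inverse=True)  # inverse pass
```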
 
class echoflow.core.OneHot(cardinality: List[int])

Bases: echoflow.core.base.BaseFlow

Normalizing flow layer.

Parameters
- cardinality – The number of categories in each discrete dimension.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
 
class echoflow.core.Reverse(input_dims: int)

Bases: echoflow.core.base.BaseFlow

Reversing layer from MADE. Reverses the order of the input dimensions.

Parameters
- input_dims – The number of input dimensions.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
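
A sketch of where Reverse fits: a permutation has a unit Jacobian, so the returned logdet should be zero, and the layer is typically interleaved between autoregressive layers so every dimension eventually conditions on every other:

```python
import torch
from echoflow.core import Reverse

flow = Reverse(input_dims=4)
x = torch.randn(8, 4)

y, logdet = flow(x)               # features in reversed order, logdet of zero
x_rec, _ = flow(y, inverse=True)  # reversing twice restores the input
```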
 
class echoflow.core.SequentialFlow(*modules)

Bases: echoflow.core.base.BaseFlow

Apply a sequence of flows.

Parameters
- *modules – The BaseFlow layers to apply, in order.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
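
A sketch composing several of the layers above into a single flow, based on the documented constructor and forward contract; the layer sizes and mask below are illustrative only:

```python
import torch
from echoflow.core import BatchNorm, Coupling, Reverse, SequentialFlow

dims = 6
mask = torch.tensor([1., 0., 1., 0., 1., 0.])

flow = SequentialFlow(
    Coupling(dims, hidden_dims=64, input_mask=mask),
    BatchNorm(dims),
    Reverse(dims),
    Coupling(dims, hidden_dims=64, input_mask=1 - mask),
)

x = torch.randn(32, dims)
z, logdet = flow(x)               # data -> latent, with the accumulated log-determinant
x_rec, _ = flow(z, inverse=True)  # latent -> data
```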
 
Submodules

echoflow.core.base module
class echoflow.core.base.BaseFlow

Bases: torch.nn.modules.module.Module

Base class for normalizing flow layers.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
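
A sketch of the contract a BaseFlow subclass is expected to satisfy, based on the forward signature above: return the transformed tensor and the per-sample log-determinant, and honor the inverse flag. The Affine layer here is hypothetical and not part of echoflow:

```python
import torch
from echoflow.core.base import BaseFlow

class Affine(BaseFlow):
    """Hypothetical elementwise affine flow: y = x * exp(log_scale) + shift."""

    def __init__(self, input_dims: int):
        super().__init__()
        self.log_scale = torch.nn.Parameter(torch.zeros(input_dims))
        self.shift = torch.nn.Parameter(torch.zeros(input_dims))

    def forward(self, inputs, contexts=None, inverse=False):
        if inverse:
            # Invert y = x * exp(log_scale) + shift.
            outputs = (inputs - self.shift) * torch.exp(-self.log_scale)
            logdet = -self.log_scale.sum().expand(inputs.shape[0])
        else:
            outputs = inputs * torch.exp(self.log_scale) + self.shift
            logdet = self.log_scale.sum().expand(inputs.shape[0])
        return outputs, logdet
```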
 
echoflow.core.batch_norm module
class echoflow.core.batch_norm.BatchNorm(input_dims: int, momentum: float = 0.0, eps: float = 1e-05)

Bases: echoflow.core.base.BaseFlow

Batch normalization from RealNVP.

Parameters
- input_dims – The number of input dimensions.
- momentum – The momentum used to compute the running mean/var.
- eps – A small constant added to the variance for numerical stability.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
 
echoflow.core.coupling module
class echoflow.core.coupling.Coupling(input_dims: int, hidden_dims: int, input_mask: torch.Tensor, context_dims: int = 0)

Bases: echoflow.core.base.BaseFlow

Coupling layer from RealNVP. The coupling layer partitions the input x into two parts, x_1 and x_2, and applies an invertible transform

\[\begin{split}
y_1 &= x_1 \\
y_2 &= x_2 \odot \exp(s(x_1)) + t(x_1)
\end{split}\]

which modifies only one of the partitions.

Parameters
- input_dims – The number of input dimensions.
- hidden_dims – The hidden size to use for the scale/translate nets.
- input_mask – A binary mask for the input.
- context_dims – The number of context dimensions. If specified, then the output is conditioned on context.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
 
echoflow.core.made module
class echoflow.core.made.MADE(input_dims, hidden_dims, context_dims=0)

Bases: echoflow.core.base.BaseFlow

Autoregressive flow layer based on MADE (Masked Autoencoder for Distribution Estimation).

Parameters
- input_dims – The number of input dimensions.
- hidden_dims – The hidden size of the masked layers.
- context_dims – The number of context dimensions. If specified, then the output is conditioned on context.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
 
class echoflow.core.made.MaskedLinear(input_dims: int, out_features: int, weight_mask: torch.Tensor, context_dims: int = 0)

Bases: echoflow.core.base.BaseFlow

Linear layer whose weights are masked to enforce the autoregressive structure used by MADE.

Parameters
- input_dims – The number of input dimensions.
- out_features – The number of output features.
- weight_mask – A binary mask applied to the weight matrix.
- context_dims – The number of context dimensions. If specified, then the output is conditioned on context.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
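
A sketch based on the documented signature. The mask orientation (out_features × input_dims, matching the weight matrix) and the strictly lower-triangular choice are assumptions for illustration; MADE normally builds these masks internally:

```python
import torch
from echoflow.core.made import MaskedLinear

in_dims, out_features = 3, 3
# Zero out weights so that output i only depends on inputs j < i.
mask = torch.tril(torch.ones(out_features, in_dims), diagonal=-1)

layer = MaskedLinear(in_dims, out_features, weight_mask=mask)
x = torch.randn(5, in_dims)
y, _ = layer(x)  # forward returns (outputs, logdet) per the BaseFlow contract
```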
 
echoflow.core.one_hot module
class echoflow.core.one_hot.OneHot(cardinality: List[int])

Bases: echoflow.core.base.BaseFlow

Normalizing flow layer.

Parameters
- cardinality – The number of categories in each discrete dimension.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
 
echoflow.core.reverse module
class echoflow.core.reverse.Reverse(input_dims: int)

Bases: echoflow.core.base.BaseFlow

Reversing layer from MADE. Reverses the order of the input dimensions.

Parameters
- input_dims – The number of input dimensions.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None
 
echoflow.core.sequential module
class echoflow.core.sequential.SequentialFlow(*modules)

Bases: echoflow.core.base.BaseFlow

Apply a sequence of flows.

Parameters
- *modules – The BaseFlow layers to apply, in order.

forward(inputs: torch.Tensor, contexts: Optional[torch.Tensor] = None, inverse: bool = False)

Transform a batch of data.

Parameters
- inputs – The input tensor.
- contexts – An optional context tensor (for conditional sampling).
- inverse – Whether to apply the direct or inverse transform.

Returns
- outputs (torch.Tensor) – The output tensor.
- logdet (torch.Tensor) – The log-determinant of the Jacobian.

training = None