Model: "encoder_r1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 4096, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 4096, 96) 18528 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 4096, 96) 384 shared_conv_0[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_4_batchnorm[0][0] shared_conv_6_batchnorm[0][0] shared_conv_8_batchnorm[0][0] shared_conv_10_batchnorm[0][0] r1_dense_1_batchnorm[0][0] r1_dense_2_batchnorm[0][0] r1_dense_3_batchnorm[0][0] __________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 2048, 96) 0 leaky_re_lu[0][0] __________________________________________________________________________________________________ shared_conv_2 (Conv1D) (None, 2048, 64) 393280 max_pooling1d[0][0] __________________________________________________________________________________________________ shared_conv_2_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_2[0][0] __________________________________________________________________________________________________ max_pooling1d_1 (MaxPooling1D) (None, 1024, 64) 0 leaky_re_lu[1][0] __________________________________________________________________________________________________ shared_conv_4 (Conv1D) (None, 1024, 64) 131136 max_pooling1d_1[0][0] __________________________________________________________________________________________________ shared_conv_4_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_4[0][0] __________________________________________________________________________________________________ max_pooling1d_2 (MaxPooling1D) (None, 512, 64) 0 leaky_re_lu[2][0] __________________________________________________________________________________________________ shared_conv_6 (Conv1D) (None, 512, 64) 131136 max_pooling1d_2[0][0] __________________________________________________________________________________________________ shared_conv_6_batchnorm (BatchN (None, 512, 64) 256 shared_conv_6[0][0] __________________________________________________________________________________________________ max_pooling1d_3 (MaxPooling1D) (None, 256, 64) 0 leaky_re_lu[3][0] __________________________________________________________________________________________________ shared_conv_8 (Conv1D) (None, 256, 64) 65600 max_pooling1d_3[0][0] __________________________________________________________________________________________________ shared_conv_8_batchnorm (BatchN (None, 256, 64) 256 shared_conv_8[0][0] __________________________________________________________________________________________________ max_pooling1d_4 (MaxPooling1D) (None, 128, 64) 0 leaky_re_lu[4][0] __________________________________________________________________________________________________ shared_conv_10 (Conv1D) (None, 128, 64) 65600 max_pooling1d_4[0][0] __________________________________________________________________________________________________ shared_conv_10_batchnorm (Batch (None, 128, 64) 256 shared_conv_10[0][0] 
__________________________________________________________________________________________________ max_pooling1d_5 (MaxPooling1D) (None, 64, 64) 0 leaky_re_lu[5][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 4096) 0 max_pooling1d_5[0][0] __________________________________________________________________________________________________ r1_dense_1 (Dense) (None, 4096) 16781312 flatten[0][0] __________________________________________________________________________________________________ r1_dense_1_batchnorm (BatchNorm (None, 4096) 16384 r1_dense_1[0][0] __________________________________________________________________________________________________ r1_dense_2 (Dense) (None, 2048) 8390656 leaky_re_lu[6][0] __________________________________________________________________________________________________ r1_dense_2_batchnorm (BatchNorm (None, 2048) 8192 r1_dense_2[0][0] __________________________________________________________________________________________________ r1_dense_3 (Dense) (None, 1024) 2098176 leaky_re_lu[7][0] __________________________________________________________________________________________________ r1_dense_3_batchnorm (BatchNorm (None, 1024) 4096 r1_dense_3[0][0] __________________________________________________________________________________________________ r1_mean_dense (Dense) (None, 6) 6150 leaky_re_lu[8][0] __________________________________________________________________________________________________ r1_logvar_dense (Dense) (None, 6) 6150 leaky_re_lu[8][0] __________________________________________________________________________________________________ r1_modes_dense (Dense) (None, 3) 3075 leaky_re_lu[8][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 15) 0 r1_mean_dense[0][0] r1_logvar_dense[0][0] r1_modes_dense[0][0] ================================================================================================== Total params: 28,121,135 Trainable params: 28,105,967 Non-trainable params: 15,168 __________________________________________________________________________________________________ Model: "encoder_q" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 4096, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 4096, 96) 18528 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 4096, 96) 384 shared_conv_0[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_4_batchnorm[0][0] shared_conv_6_batchnorm[0][0] shared_conv_8_batchnorm[0][0] shared_conv_10_batchnorm[0][0] batch_normalization[0][0] q_dense_1_batchnorm[0][0] q_dense_2_batchnorm[0][0] q_dense_3_batchnorm[0][0] __________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 2048, 96) 0 leaky_re_lu[0][0] __________________________________________________________________________________________________ 
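The encoder_r1 summary above is consistent with the Keras functional-API sketch below. This is an illustrative reconstruction, not the released implementation: the kernel sizes are inferred from the Param # column (e.g. 18528 = 64*3*96 + 96 for shared_conv_0), and the helper names trunk and act, the loop structure, and the "3 mixture components x 2-d latent" reading of the 6/6/3 output widths are introduced here for readability. A single LeakyReLU instance is reused throughout, which is why the summary reports its output shape as "multiple".

import tensorflow as tf
from tensorflow.keras import layers

conv_filters = [96, 64, 64, 64, 64, 64]    # shared_conv_0 ... shared_conv_10
conv_kernels = [64, 64, 32, 32, 16, 16]    # inferred from the Param # column

y_in = tf.keras.Input(shape=(4096, 3))     # input_1: 4096 time samples x 3 channels
act  = layers.LeakyReLU()                  # one shared activation instance (the "multiple" row)

# Shared convolutional trunk; its layers and output tensor are reused by all three networks
trunk = y_in
for i, (f, k) in enumerate(zip(conv_filters, conv_kernels)):
    trunk = layers.Conv1D(f, k, padding="same", name=f"shared_conv_{2*i}")(trunk)
    trunk = act(layers.BatchNormalization(name=f"shared_conv_{2*i}_batchnorm")(trunk))
    trunk = layers.MaxPooling1D(2)(trunk)  # 4096 -> 2048 -> ... -> 64
# trunk has shape (None, 64, 64)

# encoder_r1 head: three Dense/BatchNorm/LeakyReLU blocks, then the mixture parameters
h = layers.Flatten()(trunk)                # (None, 4096)
for units, tag in [(4096, 1), (2048, 2), (1024, 3)]:
    h = layers.Dense(units, name=f"r1_dense_{tag}")(h)
    h = act(layers.BatchNormalization(name=f"r1_dense_{tag}_batchnorm")(h))

mean   = layers.Dense(6, name="r1_mean_dense")(h)    # 1024*6 + 6 = 6150 params
logvar = layers.Dense(6, name="r1_logvar_dense")(h)
modes  = layers.Dense(3, name="r1_modes_dense")(h)   # mixture weights
encoder_r1 = tf.keras.Model(y_in, layers.Concatenate()([mean, logvar, modes]),
                            name="encoder_r1")
encoder_r1.summary()    # layer-by-layer parameter counts match the table, 28,121,135 in total

The 15,168 non-trainable parameters are the moving statistics of the batch-normalization layers (2 per channel: 2*(96 + 5*64) for the convolutional trunk plus 2*(4096 + 2048 + 1024) for the dense stack).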
Model: "encoder_q"
__________________________________________________________________________________________________________
Layer (type)                                    Output Shape        Param #   Connected to
==========================================================================================================
input_1 (InputLayer)                            [(None, 4096, 3)]   0
shared_conv_0 (Conv1D)                          (None, 4096, 96)    18528     input_1[0][0]
shared_conv_0_batchnorm (BatchNormalization)    (None, 4096, 96)    384       shared_conv_0[0][0]
leaky_re_lu (LeakyReLU)                         multiple            0         shared_conv_0_batchnorm[0][0]
                                                                              shared_conv_2_batchnorm[0][0]
                                                                              shared_conv_4_batchnorm[0][0]
                                                                              shared_conv_6_batchnorm[0][0]
                                                                              shared_conv_8_batchnorm[0][0]
                                                                              shared_conv_10_batchnorm[0][0]
                                                                              batch_normalization[0][0]
                                                                              q_dense_1_batchnorm[0][0]
                                                                              q_dense_2_batchnorm[0][0]
                                                                              q_dense_3_batchnorm[0][0]
max_pooling1d (MaxPooling1D)                    (None, 2048, 96)    0         leaky_re_lu[0][0]
shared_conv_2 (Conv1D)                          (None, 2048, 64)    393280    max_pooling1d[0][0]
shared_conv_2_batchnorm (BatchNormalization)    (None, 2048, 64)    256       shared_conv_2[0][0]
max_pooling1d_1 (MaxPooling1D)                  (None, 1024, 64)    0         leaky_re_lu[1][0]
shared_conv_4 (Conv1D)                          (None, 1024, 64)    131136    max_pooling1d_1[0][0]
shared_conv_4_batchnorm (BatchNormalization)    (None, 1024, 64)    256       shared_conv_4[0][0]
max_pooling1d_2 (MaxPooling1D)                  (None, 512, 64)     0         leaky_re_lu[2][0]
shared_conv_6 (Conv1D)                          (None, 512, 64)     131136    max_pooling1d_2[0][0]
shared_conv_6_batchnorm (BatchNormalization)    (None, 512, 64)     256       shared_conv_6[0][0]
max_pooling1d_3 (MaxPooling1D)                  (None, 256, 64)     0         leaky_re_lu[3][0]
shared_conv_8 (Conv1D)                          (None, 256, 64)     65600     max_pooling1d_3[0][0]
shared_conv_8_batchnorm (BatchNormalization)    (None, 256, 64)     256       shared_conv_8[0][0]
input_2 (InputLayer)                            [(None, 2)]         0
max_pooling1d_4 (MaxPooling1D)                  (None, 128, 64)     0         leaky_re_lu[4][0]
flatten_1 (Flatten)                             (None, 2)           0         input_2[0][0]
shared_conv_10 (Conv1D)                         (None, 128, 64)     65600     max_pooling1d_4[0][0]
q_inx_dense (Dense)                             (None, 64)          192       flatten_1[0][0]
shared_conv_10_batchnorm (BatchNormalization)   (None, 128, 64)     256       shared_conv_10[0][0]
tf.reshape (TFOpLambda)                         (None, 64, 1)       0         q_inx_dense[0][0]
batch_normalization (BatchNormalization)        (None, 64, 1)       4         tf.reshape[0][0]
max_pooling1d_5 (MaxPooling1D)                  (None, 64, 64)      0         leaky_re_lu[5][0]
concatenate_1 (Concatenate)                     (None, 64, 65)      0         max_pooling1d_5[0][0]
                                                                              leaky_re_lu[9][0]
flatten_2 (Flatten)                             (None, 4160)        0         concatenate_1[0][0]
q_dense_1 (Dense)                               (None, 4096)        17043456  flatten_2[0][0]
q_dense_1_batchnorm (BatchNormalization)        (None, 4096)        16384     q_dense_1[0][0]
q_dense_2 (Dense)                               (None, 2048)        8390656   leaky_re_lu[10][0]
q_dense_2_batchnorm (BatchNormalization)        (None, 2048)        8192      q_dense_2[0][0]
q_dense_3 (Dense)                               (None, 1024)        2098176   leaky_re_lu[11][0]
q_dense_3_batchnorm (BatchNormalization)        (None, 1024)        4096      q_dense_3[0][0]
q_mean_dense (Dense)                            (None, 2)           2050      leaky_re_lu[12][0]
q_logvar_dense (Dense)                          (None, 2)           2050      leaky_re_lu[12][0]
concatenate_2 (Concatenate)                     (None, 4)           0         q_mean_dense[0][0]
                                                                              q_logvar_dense[0][0]
==========================================================================================================
Total params: 28,372,200
Trainable params: 28,357,030
Non-trainable params: 15,170
__________________________________________________________________________________________________________
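encoder_q reuses the shared convolutional trunk and differs from encoder_r1 only in its conditioning path and output width: the second, 2-element input is embedded by q_inx_dense, reshaped to a (64, 1) channel, batch-normalised and concatenated onto the (64, 64) trunk features before the dense stack, which ends in a 2-d mean and 2-d log-variance. The sketch below is again illustrative only and reuses y_in, act and trunk from the encoder_r1 sketch above.

import tensorflow as tf
from tensorflow.keras import layers

x_in = tf.keras.Input(shape=(2,))                      # input_2: the 2-element conditioning vector
e = layers.Dense(64, name="q_inx_dense")(layers.Flatten()(x_in))   # 2*64 + 64 = 192 params
e = tf.reshape(e, (-1, 64, 1))                         # the tf.reshape (TFOpLambda) row
e = act(layers.BatchNormalization()(e))                # the 4-parameter batch_normalization

h = layers.Concatenate(axis=-1)([trunk, e])            # (None, 64, 65)
h = layers.Flatten()(h)                                # (None, 4160)
for units, tag in [(4096, 1), (2048, 2), (1024, 3)]:
    h = layers.Dense(units, name=f"q_dense_{tag}")(h)
    h = act(layers.BatchNormalization(name=f"q_dense_{tag}_batchnorm")(h))

mean   = layers.Dense(2, name="q_mean_dense")(h)       # 2-d latent mean
logvar = layers.Dense(2, name="q_logvar_dense")(h)     # 2-d latent log-variance
encoder_q = tf.keras.Model([y_in, x_in], layers.Concatenate()([mean, logvar]),
                           name="encoder_q")           # 28,372,200 params, as above

The widening of q_dense_1 relative to r1_dense_1 (17,043,456 versus 16,781,312 parameters) comes entirely from the 64 extra concatenated features: 4160*4096 + 4096 instead of 4096*4096 + 4096.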
Model: "decoder_r2"
__________________________________________________________________________________________________________
Layer (type)                                    Output Shape        Param #   Connected to
==========================================================================================================
input_1 (InputLayer)                            [(None, 4096, 3)]   0
shared_conv_0 (Conv1D)                          (None, 4096, 96)    18528     input_1[0][0]
shared_conv_0_batchnorm (BatchNormalization)    (None, 4096, 96)    384       shared_conv_0[0][0]
leaky_re_lu (LeakyReLU)                         multiple            0         shared_conv_0_batchnorm[0][0]
                                                                              shared_conv_2_batchnorm[0][0]
                                                                              shared_conv_4_batchnorm[0][0]
                                                                              shared_conv_6_batchnorm[0][0]
                                                                              shared_conv_8_batchnorm[0][0]
                                                                              shared_conv_10_batchnorm[0][0]
                                                                              batch_normalization_1[0][0]
                                                                              r2_dense_1_batchnorm[0][0]
                                                                              r2_dense_2_batchnorm[0][0]
                                                                              r2_dense_3_batchnorm[0][0]
max_pooling1d (MaxPooling1D)                    (None, 2048, 96)    0         leaky_re_lu[0][0]
shared_conv_2 (Conv1D)                          (None, 2048, 64)    393280    max_pooling1d[0][0]
shared_conv_2_batchnorm (BatchNormalization)    (None, 2048, 64)    256       shared_conv_2[0][0]
max_pooling1d_1 (MaxPooling1D)                  (None, 1024, 64)    0         leaky_re_lu[1][0]
shared_conv_4 (Conv1D)                          (None, 1024, 64)    131136    max_pooling1d_1[0][0]
shared_conv_4_batchnorm (BatchNormalization)    (None, 1024, 64)    256       shared_conv_4[0][0]
max_pooling1d_2 (MaxPooling1D)                  (None, 512, 64)     0         leaky_re_lu[2][0]
shared_conv_6 (Conv1D)                          (None, 512, 64)     131136    max_pooling1d_2[0][0]
shared_conv_6_batchnorm (BatchNormalization)    (None, 512, 64)     256       shared_conv_6[0][0]
max_pooling1d_3 (MaxPooling1D)                  (None, 256, 64)     0         leaky_re_lu[3][0]
shared_conv_8 (Conv1D)                          (None, 256, 64)     65600     max_pooling1d_3[0][0]
shared_conv_8_batchnorm (BatchNormalization)    (None, 256, 64)     256       shared_conv_8[0][0]
input_3 (InputLayer)                            [(None, 2)]         0
max_pooling1d_4 (MaxPooling1D)                  (None, 128, 64)     0         leaky_re_lu[4][0]
flatten_3 (Flatten)                             (None, 2)           0         input_3[0][0]
shared_conv_10 (Conv1D)                         (None, 128, 64)     65600     max_pooling1d_4[0][0]
r2_inz_dense (Dense)                            (None, 64)          192       flatten_3[0][0]
shared_conv_10_batchnorm (BatchNormalization)   (None, 128, 64)     256       shared_conv_10[0][0]
tf.reshape_1 (TFOpLambda)                       (None, 64, 1)       0         r2_inz_dense[0][0]
batch_normalization_1 (BatchNormalization)      (None, 64, 1)       4         tf.reshape_1[0][0]
max_pooling1d_5 (MaxPooling1D)                  (None, 64, 64)      0         leaky_re_lu[5][0]
concatenate_3 (Concatenate)                     (None, 64, 65)      0         max_pooling1d_5[0][0]
                                                                              leaky_re_lu[13][0]
flatten_4 (Flatten)                             (None, 4160)        0         concatenate_3[0][0]
r2_dense_1 (Dense)                              (None, 4096)        17043456  flatten_4[0][0]
r2_dense_1_batchnorm (BatchNormalization)       (None, 4096)        16384     r2_dense_1[0][0]
r2_dense_2 (Dense)                              (None, 2048)        8390656   leaky_re_lu[14][0]
r2_dense_2_batchnorm (BatchNormalization)       (None, 2048)        8192      r2_dense_2[0][0]
r2_dense_3 (Dense)                              (None, 1024)        2098176   leaky_re_lu[15][0]
r2_dense_3_batchnorm (BatchNormalization)       (None, 1024)        4096      r2_dense_3[0][0]
JointVonMisesFisher_mean (Dense)                (None, 3)           3075      leaky_re_lu[16][0]
JointVonMisesFisher_logvar (Dense)              (None, 1)           1025      leaky_re_lu[16][0]
concatenate_4 (Concatenate)                     (None, 4)           0         JointVonMisesFisher_mean[0][0]
                                                                              JointVonMisesFisher_logvar[0][0]
==========================================================================================================
Total params: 28,372,200
Trainable params: 28,357,030
Non-trainable params: 15,170
__________________________________________________________________________________________________________
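decoder_r2 has the same topology and parameter totals as encoder_q; only the conditioning input (r2_inz_dense, whose name suggests a 2-d latent sample z) and the output head differ, the head producing the 3-unit JointVonMisesFisher mean and a single log-variance. The sketch below is again illustrative only, reusing y_in, act and trunk from the sketches above; the reading of input_3 as a latent sample is an inference from the layer names.

import tensorflow as tf
from tensorflow.keras import layers

z_in = tf.keras.Input(shape=(2,))                      # input_3: 2-d latent sample
e = layers.Dense(64, name="r2_inz_dense")(layers.Flatten()(z_in))
e = act(layers.BatchNormalization()(tf.reshape(e, (-1, 64, 1))))

h = layers.Flatten()(layers.Concatenate(axis=-1)([trunk, e]))      # (None, 4160)
for units, tag in [(4096, 1), (2048, 2), (1024, 3)]:
    h = layers.Dense(units, name=f"r2_dense_{tag}")(h)
    h = act(layers.BatchNormalization(name=f"r2_dense_{tag}_batchnorm")(h))

mean   = layers.Dense(3, name="JointVonMisesFisher_mean")(h)       # 1024*3 + 3 = 3075 params
logvar = layers.Dense(1, name="JointVonMisesFisher_logvar")(h)     # 1024*1 + 1 = 1025 params
decoder_r2 = tf.keras.Model([y_in, z_in], layers.Concatenate()([mean, logvar]),
                            name="decoder_r2")                     # 28,372,200 params, as above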