Model: "encoder_r1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 1024, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 512, 96) 18528 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 512, 96) 384 shared_conv_0[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_1_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_3_batchnorm[0][0] shared_conv_4_batchnorm[0][0] __________________________________________________________________________________________________ shared_conv_1 (Conv1D) (None, 256, 64) 393280 leaky_re_lu[0][0] __________________________________________________________________________________________________ shared_conv_1_batchnorm (BatchN (None, 256, 64) 256 shared_conv_1[0][0] __________________________________________________________________________________________________ shared_conv_2 (Conv1D) (None, 128, 32) 65568 leaky_re_lu[1][0] __________________________________________________________________________________________________ shared_conv_2_batchnorm (BatchN (None, 128, 32) 128 shared_conv_2[0][0] __________________________________________________________________________________________________ shared_conv_3 (Conv1D) (None, 64, 16) 16400 leaky_re_lu[2][0] __________________________________________________________________________________________________ shared_conv_3_batchnorm (BatchN (None, 64, 16) 64 shared_conv_3[0][0] __________________________________________________________________________________________________ shared_conv_4 (Conv1D) (None, 32, 16) 4112 leaky_re_lu[3][0] __________________________________________________________________________________________________ shared_conv_4_batchnorm (BatchN (None, 32, 16) 64 shared_conv_4[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 512) 0 leaky_re_lu[4][0] __________________________________________________________________________________________________ r1_dense_0 (Dense) (None, 4096) 2101248 flatten[0][0] __________________________________________________________________________________________________ r1_dense_0_batchnorm (BatchNorm (None, 4096) 16384 r1_dense_0[0][0] __________________________________________________________________________________________________ r1_dense_1 (Dense) (None, 2048) 8390656 r1_dense_0_batchnorm[0][0] __________________________________________________________________________________________________ r1_dense_1_batchnorm (BatchNorm (None, 2048) 8192 r1_dense_1[0][0] __________________________________________________________________________________________________ r1_dense_2 (Dense) (None, 1024) 2098176 r1_dense_1_batchnorm[0][0] __________________________________________________________________________________________________ r1_dense_2_batchnorm (BatchNorm (None, 1024) 4096 r1_dense_2[0][0] __________________________________________________________________________________________________ r1_mean_dense (Dense) (None, 1024) 1049600 r1_dense_2_batchnorm[0][0] 
__________________________________________________________________________________________________ r1_logvar_dense (Dense) (None, 1024) 1049600 r1_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ r1_modes_dense (Dense) (None, 32) 32800 r1_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 2080) 0 r1_mean_dense[0][0] r1_logvar_dense[0][0] r1_modes_dense[0][0] ================================================================================================== Total params: 15,249,536 Trainable params: 15,234,752 Non-trainable params: 14,784 __________________________________________________________________________________________________ Model: "encoder_q" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 1024, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 512, 96) 18528 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 512, 96) 384 shared_conv_0[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_1_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_3_batchnorm[0][0] shared_conv_4_batchnorm[0][0] __________________________________________________________________________________________________ shared_conv_1 (Conv1D) (None, 256, 64) 393280 leaky_re_lu[0][0] __________________________________________________________________________________________________ shared_conv_1_batchnorm (BatchN (None, 256, 64) 256 shared_conv_1[0][0] __________________________________________________________________________________________________ shared_conv_2 (Conv1D) (None, 128, 32) 65568 leaky_re_lu[1][0] __________________________________________________________________________________________________ shared_conv_2_batchnorm (BatchN (None, 128, 32) 128 shared_conv_2[0][0] __________________________________________________________________________________________________ shared_conv_3 (Conv1D) (None, 64, 16) 16400 leaky_re_lu[2][0] __________________________________________________________________________________________________ shared_conv_3_batchnorm (BatchN (None, 64, 16) 64 shared_conv_3[0][0] __________________________________________________________________________________________________ shared_conv_4 (Conv1D) (None, 32, 16) 4112 leaky_re_lu[3][0] __________________________________________________________________________________________________ shared_conv_4_batchnorm (BatchN (None, 32, 16) 64 shared_conv_4[0][0] __________________________________________________________________________________________________ input_2 (InputLayer) [(None, 15)] 0 __________________________________________________________________________________________________ flatten (Flatten) (None, 512) 0 leaky_re_lu[4][0] __________________________________________________________________________________________________ flatten_1 (Flatten) (None, 15) 0 input_2[0][0] 
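The summary above can be reproduced, up to details the summary does not record, with a short tf.keras sketch. The kernel sizes (64, 64, 32, 32, 16), the stride of 2, and the reading of the 1024-unit mean and log-variance outputs as 32 mixture components times a 32-dimensional latent space are inferred from the output shapes and parameter counts, not taken from the original source code; any activation inside the Dense layers would not appear as a separate row and is therefore omitted.

from tensorflow.keras import layers, Model

n_samples, n_channels = 1024, 3        # input time series: 1024 samples x 3 channels
n_modes, z_dim = 32, 32                # assumed: 32 mixture components x 32 latent dimensions

# Shared convolutional trunk. The same layer objects are reused by all three
# networks, which is presumably why the shared_conv_* rows and a single
# leaky_re_lu layer (output shape "multiple") appear in every summary.
act = layers.LeakyReLU()
trunk = [(layers.Conv1D(f, k, strides=2, padding='same', name=f'shared_conv_{i}'),
          layers.BatchNormalization(name=f'shared_conv_{i}_batchnorm'))
         for i, (f, k) in enumerate(zip([96, 64, 32, 16, 16],     # filters, from the output shapes
                                        [64, 64, 32, 32, 16]))]   # kernel sizes, inferred from param counts

x_in = layers.Input(shape=(n_samples, n_channels))
h = x_in
for conv, bn in trunk:
    h = act(bn(conv(h)))               # Conv1D -> BatchNorm -> shared LeakyReLU
flat = layers.Flatten()(h)             # (None, 32, 16) -> (None, 512)

# Dense head of the r1 encoder; any activation on these Dense layers would be
# internal to the layer and invisible in the summary, so none is shown here.
h = flat
for i, width in enumerate([4096, 2048, 1024]):
    h = layers.Dense(width, name=f'r1_dense_{i}')(h)
    h = layers.BatchNormalization(name=f'r1_dense_{i}_batchnorm')(h)

r1_mean = layers.Dense(n_modes * z_dim, name='r1_mean_dense')(h)      # (None, 1024)
r1_logvar = layers.Dense(n_modes * z_dim, name='r1_logvar_dense')(h)  # (None, 1024)
r1_weights = layers.Dense(n_modes, name='r1_modes_dense')(h)          # (None, 32)

encoder_r1 = Model(x_in, layers.concatenate([r1_mean, r1_logvar, r1_weights]),
                   name='encoder_r1')
encoder_r1.summary()                   # parameter counts match the table above

The 14,784 non-trainable parameters reported for each network are the moving means and variances of its eight BatchNormalization layers: 2 x (96 + 64 + 32 + 16 + 16) for the convolutional trunk plus 2 x (4096 + 2048 + 1024) for the dense head.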
Model: "encoder_q"
____________________________________________________________________________________________________
Layer (type)                                  Output Shape        Param #   Connected to
====================================================================================================
input_1 (InputLayer)                          [(None, 1024, 3)]   0
shared_conv_0 (Conv1D)                        (None, 512, 96)     18528     input_1[0][0]
shared_conv_0_batchnorm (BatchNormalization)  (None, 512, 96)     384       shared_conv_0[0][0]
leaky_re_lu (LeakyReLU)                       multiple            0         shared_conv_0_batchnorm[0][0]
                                                                            shared_conv_1_batchnorm[0][0]
                                                                            shared_conv_2_batchnorm[0][0]
                                                                            shared_conv_3_batchnorm[0][0]
                                                                            shared_conv_4_batchnorm[0][0]
shared_conv_1 (Conv1D)                        (None, 256, 64)     393280    leaky_re_lu[0][0]
shared_conv_1_batchnorm (BatchNormalization)  (None, 256, 64)     256       shared_conv_1[0][0]
shared_conv_2 (Conv1D)                        (None, 128, 32)     65568     leaky_re_lu[1][0]
shared_conv_2_batchnorm (BatchNormalization)  (None, 128, 32)     128       shared_conv_2[0][0]
shared_conv_3 (Conv1D)                        (None, 64, 16)      16400     leaky_re_lu[2][0]
shared_conv_3_batchnorm (BatchNormalization)  (None, 64, 16)      64        shared_conv_3[0][0]
shared_conv_4 (Conv1D)                        (None, 32, 16)      4112      leaky_re_lu[3][0]
shared_conv_4_batchnorm (BatchNormalization)  (None, 32, 16)      64        shared_conv_4[0][0]
input_2 (InputLayer)                          [(None, 15)]        0
flatten (Flatten)                             (None, 512)         0         leaky_re_lu[4][0]
flatten_1 (Flatten)                           (None, 15)          0         input_2[0][0]
concatenate_1 (Concatenate)                   (None, 527)         0         flatten[0][0]
                                                                            flatten_1[0][0]
q_dense_0 (Dense)                             (None, 4096)        2162688   concatenate_1[0][0]
q_dense_0_batchnorm (BatchNormalization)      (None, 4096)        16384     q_dense_0[0][0]
q_dense_1 (Dense)                             (None, 2048)        8390656   q_dense_0_batchnorm[0][0]
q_dense_1_batchnorm (BatchNormalization)      (None, 2048)        8192      q_dense_1[0][0]
q_dense_2 (Dense)                             (None, 1024)        2098176   q_dense_1_batchnorm[0][0]
q_dense_2_batchnorm (BatchNormalization)      (None, 1024)        4096      q_dense_2[0][0]
q_mean_dense (Dense)                          (None, 32)          32800     q_dense_2_batchnorm[0][0]
q_logvar_dense (Dense)                        (None, 32)          32800     q_dense_2_batchnorm[0][0]
concatenate_2 (Concatenate)                   (None, 64)          0         q_mean_dense[0][0]
                                                                            q_logvar_dense[0][0]
====================================================================================================
Total params: 13,244,576
Trainable params: 13,229,792
Non-trainable params: 14,784
____________________________________________________________________________________________________
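A corresponding sketch for "encoder_q", continuing from the encoder_r1 code above (it reuses x_in, flat and z_dim defined there). The 15-dimensional second input is assumed, from its shape alone, to be the vector of physical parameters the network is conditioned on.

n_params = 15                          # assumed: the parameters the network is conditioned on

y_in = layers.Input(shape=(n_params,))
h = layers.concatenate([flat, layers.Flatten()(y_in)])    # (None, 512 + 15) = (None, 527)

for i, width in enumerate([4096, 2048, 1024]):
    h = layers.Dense(width, name=f'q_dense_{i}')(h)
    h = layers.BatchNormalization(name=f'q_dense_{i}_batchnorm')(h)

q_mean = layers.Dense(z_dim, name='q_mean_dense')(h)      # single 32-D Gaussian mean
q_logvar = layers.Dense(z_dim, name='q_logvar_dense')(h)  # and log-variance

encoder_q = Model([x_in, y_in], layers.concatenate([q_mean, q_logvar]),
                  name='encoder_q')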
Model: "decoder_r2"
____________________________________________________________________________________________________
Layer (type)                                  Output Shape        Param #   Connected to
====================================================================================================
input_1 (InputLayer)                          [(None, 1024, 3)]   0
shared_conv_0 (Conv1D)                        (None, 512, 96)     18528     input_1[0][0]
shared_conv_0_batchnorm (BatchNormalization)  (None, 512, 96)     384       shared_conv_0[0][0]
leaky_re_lu (LeakyReLU)                       multiple            0         shared_conv_0_batchnorm[0][0]
                                                                            shared_conv_1_batchnorm[0][0]
                                                                            shared_conv_2_batchnorm[0][0]
                                                                            shared_conv_3_batchnorm[0][0]
                                                                            shared_conv_4_batchnorm[0][0]
shared_conv_1 (Conv1D)                        (None, 256, 64)     393280    leaky_re_lu[0][0]
shared_conv_1_batchnorm (BatchNormalization)  (None, 256, 64)     256       shared_conv_1[0][0]
shared_conv_2 (Conv1D)                        (None, 128, 32)     65568     leaky_re_lu[1][0]
shared_conv_2_batchnorm (BatchNormalization)  (None, 128, 32)     128       shared_conv_2[0][0]
shared_conv_3 (Conv1D)                        (None, 64, 16)      16400     leaky_re_lu[2][0]
shared_conv_3_batchnorm (BatchNormalization)  (None, 64, 16)      64        shared_conv_3[0][0]
shared_conv_4 (Conv1D)                        (None, 32, 16)      4112      leaky_re_lu[3][0]
shared_conv_4_batchnorm (BatchNormalization)  (None, 32, 16)      64        shared_conv_4[0][0]
input_3 (InputLayer)                          [(None, 32)]        0
flatten (Flatten)                             (None, 512)         0         leaky_re_lu[4][0]
flatten_2 (Flatten)                           (None, 32)          0         input_3[0][0]
concatenate_3 (Concatenate)                   (None, 544)         0         flatten[0][0]
                                                                            flatten_2[0][0]
r2_dense_0 (Dense)                            (None, 4096)        2232320   concatenate_3[0][0]
r2_dense_0_batchnorm (BatchNormalization)     (None, 4096)        16384     r2_dense_0[0][0]
r2_dense_1 (Dense)                            (None, 2048)        8390656   r2_dense_0_batchnorm[0][0]
r2_dense_1_batchnorm (BatchNormalization)     (None, 2048)        8192      r2_dense_1[0][0]
r2_dense_2 (Dense)                            (None, 1024)        2098176   r2_dense_1_batchnorm[0][0]
r2_dense_2_batchnorm (BatchNormalization)     (None, 1024)        4096      r2_dense_2[0][0]
JointM1M2_mean (Dense)                        (None, 2)           2050      r2_dense_2_batchnorm[0][0]
JointM1M2_logvar (Dense)                      (None, 2)           2050      r2_dense_2_batchnorm[0][0]
VonMises_mean (Dense)                         (None, 10)          10250     r2_dense_2_batchnorm[0][0]
VonMises_logvar (Dense)                       (None, 5)           5125      r2_dense_2_batchnorm[0][0]
TruncatedNormal_mean (Dense)                  (None, 6)           6150      r2_dense_2_batchnorm[0][0]
TruncatedNormal_logvar (Dense)                (None, 6)           6150      r2_dense_2_batchnorm[0][0]
JointVonMisesFisher_mean (Dense)              (None, 3)           3075      r2_dense_2_batchnorm[0][0]
JointVonMisesFisher_logvar (Dense)            (None, 1)           1025      r2_dense_2_batchnorm[0][0]
concatenate_4 (Concatenate)                   (None, 35)          0         JointM1M2_mean[0][0]
                                                                            JointM1M2_logvar[0][0]
                                                                            VonMises_mean[0][0]
                                                                            VonMises_logvar[0][0]
                                                                            TruncatedNormal_mean[0][0]
                                                                            TruncatedNormal_logvar[0][0]
                                                                            JointVonMisesFisher_mean[0][0]
                                                                            JointVonMisesFisher_logvar[0][0]
====================================================================================================
Total params: 13,284,483
Trainable params: 13,269,699
Non-trainable params: 14,784
____________________________________________________________________________________________________
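Finally, a sketch of the "decoder_r2" output stage, again continuing from the sketches above (reusing x_in, flat and z_dim). The widths of the eight distribution heads are read directly from the summary; how each head maps onto the named distributions (for example, whether the ten von Mises mean units are cos/sin pairs for five angles) is not recoverable from the summary alone and is left as an assumption in the comments.

z_in = layers.Input(shape=(z_dim,))
h = layers.concatenate([flat, layers.Flatten()(z_in)])    # (None, 512 + 32) = (None, 544)

for i, width in enumerate([4096, 2048, 1024]):
    h = layers.Dense(width, name=f'r2_dense_{i}')(h)
    h = layers.BatchNormalization(name=f'r2_dense_{i}_batchnorm')(h)

# (head name, mean width, log-variance width) read off the summary; the mapping
# onto parameter groups is assumed, not taken from the original code.
heads = [('JointM1M2', 2, 2), ('VonMises', 10, 5),
         ('TruncatedNormal', 6, 6), ('JointVonMisesFisher', 3, 1)]
outputs = []
for name, n_mean, n_logvar in heads:
    outputs.append(layers.Dense(n_mean, name=f'{name}_mean')(h))
    outputs.append(layers.Dense(n_logvar, name=f'{name}_logvar')(h))

decoder_r2 = Model([x_in, z_in], layers.concatenate(outputs),   # (None, 35)
                   name='decoder_r2')

Because the convolutional trunk is shared, its 498,784 parameters (497,888 in the Conv1D layers plus 896 in their batch-normalization layers) are counted once in each of the three totals above.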