Model: "encoder_r1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 4096, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 4096, 128) 49280 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 4096, 128) 512 shared_conv_0[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_1_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_3_batchnorm[0][0] shared_conv_4_batchnorm[0][0] shared_conv_5_batchnorm[0][0] shared_conv_6_batchnorm[0][0] shared_conv_7_batchnorm[0][0] shared_conv_8_batchnorm[0][0] shared_conv_9_batchnorm[0][0] shared_conv_10_batchnorm[0][0] shared_conv_11_batchnorm[0][0] __________________________________________________________________________________________________ shared_conv_1 (Conv1D) (None, 4096, 128) 1048704 leaky_re_lu[0][0] __________________________________________________________________________________________________ shared_conv_1_batchnorm (BatchN (None, 4096, 128) 512 shared_conv_1[0][0] __________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 2048, 128) 0 leaky_re_lu[1][0] __________________________________________________________________________________________________ shared_conv_2 (Conv1D) (None, 2048, 64) 524352 max_pooling1d[0][0] __________________________________________________________________________________________________ shared_conv_2_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_2[0][0] __________________________________________________________________________________________________ shared_conv_3 (Conv1D) (None, 2048, 64) 131136 leaky_re_lu[2][0] __________________________________________________________________________________________________ shared_conv_3_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_3[0][0] __________________________________________________________________________________________________ max_pooling1d_1 (MaxPooling1D) (None, 1024, 64) 0 leaky_re_lu[3][0] __________________________________________________________________________________________________ shared_conv_4 (Conv1D) (None, 1024, 64) 131136 max_pooling1d_1[0][0] __________________________________________________________________________________________________ shared_conv_4_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_4[0][0] __________________________________________________________________________________________________ shared_conv_5 (Conv1D) (None, 1024, 64) 65600 leaky_re_lu[4][0] __________________________________________________________________________________________________ shared_conv_5_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_5[0][0] __________________________________________________________________________________________________ max_pooling1d_2 (MaxPooling1D) (None, 512, 64) 0 leaky_re_lu[5][0] __________________________________________________________________________________________________ shared_conv_6 (Conv1D) (None, 512, 32) 32800 max_pooling1d_2[0][0] 
__________________________________________________________________________________________________ shared_conv_6_batchnorm (BatchN (None, 512, 32) 128 shared_conv_6[0][0] __________________________________________________________________________________________________ shared_conv_7 (Conv1D) (None, 512, 32) 8224 leaky_re_lu[6][0] __________________________________________________________________________________________________ shared_conv_7_batchnorm (BatchN (None, 512, 32) 128 shared_conv_7[0][0] __________________________________________________________________________________________________ max_pooling1d_3 (MaxPooling1D) (None, 256, 32) 0 leaky_re_lu[7][0] __________________________________________________________________________________________________ shared_conv_8 (Conv1D) (None, 256, 32) 8224 max_pooling1d_3[0][0] __________________________________________________________________________________________________ shared_conv_8_batchnorm (BatchN (None, 256, 32) 128 shared_conv_8[0][0] __________________________________________________________________________________________________ shared_conv_9 (Conv1D) (None, 256, 32) 4128 leaky_re_lu[8][0] __________________________________________________________________________________________________ shared_conv_9_batchnorm (BatchN (None, 256, 32) 128 shared_conv_9[0][0] __________________________________________________________________________________________________ max_pooling1d_4 (MaxPooling1D) (None, 128, 32) 0 leaky_re_lu[9][0] __________________________________________________________________________________________________ shared_conv_10 (Conv1D) (None, 128, 32) 4128 max_pooling1d_4[0][0] __________________________________________________________________________________________________ shared_conv_10_batchnorm (Batch (None, 128, 32) 128 shared_conv_10[0][0] __________________________________________________________________________________________________ shared_conv_11 (Conv1D) (None, 128, 32) 4128 leaky_re_lu[10][0] __________________________________________________________________________________________________ shared_conv_11_batchnorm (Batch (None, 128, 32) 128 shared_conv_11[0][0] __________________________________________________________________________________________________ max_pooling1d_5 (MaxPooling1D) (None, 64, 32) 0 leaky_re_lu[11][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 2048) 0 max_pooling1d_5[0][0] __________________________________________________________________________________________________ r1_dense_0 (Dense) (None, 1024) 2098176 flatten[0][0] __________________________________________________________________________________________________ r1_dense_0_batchnorm (BatchNorm (None, 1024) 4096 r1_dense_0[0][0] __________________________________________________________________________________________________ flatten_1 (Flatten) (None, 1024) 0 r1_dense_0_batchnorm[0][0] __________________________________________________________________________________________________ r1_dense_1 (Dense) (None, 512) 524800 flatten_1[0][0] __________________________________________________________________________________________________ r1_dense_1_batchnorm (BatchNorm (None, 512) 2048 r1_dense_1[0][0] __________________________________________________________________________________________________ flatten_2 (Flatten) (None, 512) 0 r1_dense_1_batchnorm[0][0] __________________________________________________________________________________________________ 
r1_dense_2 (Dense) (None, 512) 262656 flatten_2[0][0] __________________________________________________________________________________________________ r1_dense_2_batchnorm (BatchNorm (None, 512) 2048 r1_dense_2[0][0] __________________________________________________________________________________________________ r1_mean_dense (Dense) (None, 144) 73872 r1_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ r1_logvar_dense (Dense) (None, 144) 73872 r1_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ r1_modes_dense (Dense) (None, 12) 6156 r1_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 300) 0 r1_mean_dense[0][0] r1_logvar_dense[0][0] r1_modes_dense[0][0] ================================================================================================== Total params: 5,062,380 Trainable params: 5,056,876 Non-trainable params: 5,504 __________________________________________________________________________________________________ Model: "encoder_q" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 4096, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 4096, 128) 49280 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 4096, 128) 512 shared_conv_0[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_1_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_3_batchnorm[0][0] shared_conv_4_batchnorm[0][0] shared_conv_5_batchnorm[0][0] shared_conv_6_batchnorm[0][0] shared_conv_7_batchnorm[0][0] shared_conv_8_batchnorm[0][0] shared_conv_9_batchnorm[0][0] shared_conv_10_batchnorm[0][0] shared_conv_11_batchnorm[0][0] __________________________________________________________________________________________________ shared_conv_1 (Conv1D) (None, 4096, 128) 1048704 leaky_re_lu[0][0] __________________________________________________________________________________________________ shared_conv_1_batchnorm (BatchN (None, 4096, 128) 512 shared_conv_1[0][0] __________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 2048, 128) 0 leaky_re_lu[1][0] __________________________________________________________________________________________________ shared_conv_2 (Conv1D) (None, 2048, 64) 524352 max_pooling1d[0][0] __________________________________________________________________________________________________ shared_conv_2_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_2[0][0] __________________________________________________________________________________________________ shared_conv_3 (Conv1D) (None, 2048, 64) 131136 leaky_re_lu[2][0] __________________________________________________________________________________________________ shared_conv_3_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_3[0][0] 
__________________________________________________________________________________________________ max_pooling1d_1 (MaxPooling1D) (None, 1024, 64) 0 leaky_re_lu[3][0] __________________________________________________________________________________________________ shared_conv_4 (Conv1D) (None, 1024, 64) 131136 max_pooling1d_1[0][0] __________________________________________________________________________________________________ shared_conv_4_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_4[0][0] __________________________________________________________________________________________________ shared_conv_5 (Conv1D) (None, 1024, 64) 65600 leaky_re_lu[4][0] __________________________________________________________________________________________________ shared_conv_5_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_5[0][0] __________________________________________________________________________________________________ max_pooling1d_2 (MaxPooling1D) (None, 512, 64) 0 leaky_re_lu[5][0] __________________________________________________________________________________________________ shared_conv_6 (Conv1D) (None, 512, 32) 32800 max_pooling1d_2[0][0] __________________________________________________________________________________________________ shared_conv_6_batchnorm (BatchN (None, 512, 32) 128 shared_conv_6[0][0] __________________________________________________________________________________________________ shared_conv_7 (Conv1D) (None, 512, 32) 8224 leaky_re_lu[6][0] __________________________________________________________________________________________________ shared_conv_7_batchnorm (BatchN (None, 512, 32) 128 shared_conv_7[0][0] __________________________________________________________________________________________________ max_pooling1d_3 (MaxPooling1D) (None, 256, 32) 0 leaky_re_lu[7][0] __________________________________________________________________________________________________ shared_conv_8 (Conv1D) (None, 256, 32) 8224 max_pooling1d_3[0][0] __________________________________________________________________________________________________ shared_conv_8_batchnorm (BatchN (None, 256, 32) 128 shared_conv_8[0][0] __________________________________________________________________________________________________ shared_conv_9 (Conv1D) (None, 256, 32) 4128 leaky_re_lu[8][0] __________________________________________________________________________________________________ shared_conv_9_batchnorm (BatchN (None, 256, 32) 128 shared_conv_9[0][0] __________________________________________________________________________________________________ max_pooling1d_4 (MaxPooling1D) (None, 128, 32) 0 leaky_re_lu[9][0] __________________________________________________________________________________________________ shared_conv_10 (Conv1D) (None, 128, 32) 4128 max_pooling1d_4[0][0] __________________________________________________________________________________________________ shared_conv_10_batchnorm (Batch (None, 128, 32) 128 shared_conv_10[0][0] __________________________________________________________________________________________________ shared_conv_11 (Conv1D) (None, 128, 32) 4128 leaky_re_lu[10][0] __________________________________________________________________________________________________ shared_conv_11_batchnorm (Batch (None, 128, 32) 128 shared_conv_11[0][0] __________________________________________________________________________________________________ max_pooling1d_5 (MaxPooling1D) (None, 64, 32) 0 leaky_re_lu[11][0] 
__________________________________________________________________________________________________ input_2 (InputLayer) [(None, 15)] 0 __________________________________________________________________________________________________ flatten_4 (Flatten) (None, 2048) 0 max_pooling1d_5[0][0] __________________________________________________________________________________________________ flatten_3 (Flatten) (None, 15) 0 input_2[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 2063) 0 flatten_4[0][0] flatten_3[0][0] __________________________________________________________________________________________________ flatten_5 (Flatten) (None, 2063) 0 concatenate_1[0][0] __________________________________________________________________________________________________ q_dense_0 (Dense) (None, 1024) 2113536 flatten_5[0][0] __________________________________________________________________________________________________ q_dense_0_batchnorm (BatchNorma (None, 1024) 4096 q_dense_0[0][0] __________________________________________________________________________________________________ flatten_6 (Flatten) (None, 1024) 0 q_dense_0_batchnorm[0][0] __________________________________________________________________________________________________ q_dense_1 (Dense) (None, 512) 524800 flatten_6[0][0] __________________________________________________________________________________________________ q_dense_1_batchnorm (BatchNorma (None, 512) 2048 q_dense_1[0][0] __________________________________________________________________________________________________ flatten_7 (Flatten) (None, 512) 0 q_dense_1_batchnorm[0][0] __________________________________________________________________________________________________ q_dense_2 (Dense) (None, 512) 262656 flatten_7[0][0] __________________________________________________________________________________________________ q_dense_2_batchnorm (BatchNorma (None, 512) 2048 q_dense_2[0][0] __________________________________________________________________________________________________ q_mean_dense (Dense) (None, 12) 6156 q_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ q_logvar_dense (Dense) (None, 12) 6156 q_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 24) 0 q_mean_dense[0][0] q_logvar_dense[0][0] ================================================================================================== Total params: 4,936,152 Trainable params: 4,930,648 Non-trainable params: 5,504 __________________________________________________________________________________________________ Model: "decoder_r2" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 4096, 3)] 0 __________________________________________________________________________________________________ shared_conv_0 (Conv1D) (None, 4096, 128) 49280 input_1[0][0] __________________________________________________________________________________________________ shared_conv_0_batchnorm (BatchN (None, 4096, 128) 512 shared_conv_0[0][0] 
__________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) multiple 0 shared_conv_0_batchnorm[0][0] shared_conv_1_batchnorm[0][0] shared_conv_2_batchnorm[0][0] shared_conv_3_batchnorm[0][0] shared_conv_4_batchnorm[0][0] shared_conv_5_batchnorm[0][0] shared_conv_6_batchnorm[0][0] shared_conv_7_batchnorm[0][0] shared_conv_8_batchnorm[0][0] shared_conv_9_batchnorm[0][0] shared_conv_10_batchnorm[0][0] shared_conv_11_batchnorm[0][0] __________________________________________________________________________________________________ shared_conv_1 (Conv1D) (None, 4096, 128) 1048704 leaky_re_lu[0][0] __________________________________________________________________________________________________ shared_conv_1_batchnorm (BatchN (None, 4096, 128) 512 shared_conv_1[0][0] __________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 2048, 128) 0 leaky_re_lu[1][0] __________________________________________________________________________________________________ shared_conv_2 (Conv1D) (None, 2048, 64) 524352 max_pooling1d[0][0] __________________________________________________________________________________________________ shared_conv_2_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_2[0][0] __________________________________________________________________________________________________ shared_conv_3 (Conv1D) (None, 2048, 64) 131136 leaky_re_lu[2][0] __________________________________________________________________________________________________ shared_conv_3_batchnorm (BatchN (None, 2048, 64) 256 shared_conv_3[0][0] __________________________________________________________________________________________________ max_pooling1d_1 (MaxPooling1D) (None, 1024, 64) 0 leaky_re_lu[3][0] __________________________________________________________________________________________________ shared_conv_4 (Conv1D) (None, 1024, 64) 131136 max_pooling1d_1[0][0] __________________________________________________________________________________________________ shared_conv_4_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_4[0][0] __________________________________________________________________________________________________ shared_conv_5 (Conv1D) (None, 1024, 64) 65600 leaky_re_lu[4][0] __________________________________________________________________________________________________ shared_conv_5_batchnorm (BatchN (None, 1024, 64) 256 shared_conv_5[0][0] __________________________________________________________________________________________________ max_pooling1d_2 (MaxPooling1D) (None, 512, 64) 0 leaky_re_lu[5][0] __________________________________________________________________________________________________ shared_conv_6 (Conv1D) (None, 512, 32) 32800 max_pooling1d_2[0][0] __________________________________________________________________________________________________ shared_conv_6_batchnorm (BatchN (None, 512, 32) 128 shared_conv_6[0][0] __________________________________________________________________________________________________ shared_conv_7 (Conv1D) (None, 512, 32) 8224 leaky_re_lu[6][0] __________________________________________________________________________________________________ shared_conv_7_batchnorm (BatchN (None, 512, 32) 128 shared_conv_7[0][0] __________________________________________________________________________________________________ max_pooling1d_3 (MaxPooling1D) (None, 256, 32) 0 leaky_re_lu[7][0] 
__________________________________________________________________________________________________ shared_conv_8 (Conv1D) (None, 256, 32) 8224 max_pooling1d_3[0][0] __________________________________________________________________________________________________ shared_conv_8_batchnorm (BatchN (None, 256, 32) 128 shared_conv_8[0][0] __________________________________________________________________________________________________ shared_conv_9 (Conv1D) (None, 256, 32) 4128 leaky_re_lu[8][0] __________________________________________________________________________________________________ shared_conv_9_batchnorm (BatchN (None, 256, 32) 128 shared_conv_9[0][0] __________________________________________________________________________________________________ max_pooling1d_4 (MaxPooling1D) (None, 128, 32) 0 leaky_re_lu[9][0] __________________________________________________________________________________________________ shared_conv_10 (Conv1D) (None, 128, 32) 4128 max_pooling1d_4[0][0] __________________________________________________________________________________________________ shared_conv_10_batchnorm (Batch (None, 128, 32) 128 shared_conv_10[0][0] __________________________________________________________________________________________________ shared_conv_11 (Conv1D) (None, 128, 32) 4128 leaky_re_lu[10][0] __________________________________________________________________________________________________ shared_conv_11_batchnorm (Batch (None, 128, 32) 128 shared_conv_11[0][0] __________________________________________________________________________________________________ max_pooling1d_5 (MaxPooling1D) (None, 64, 32) 0 leaky_re_lu[11][0] __________________________________________________________________________________________________ input_3 (InputLayer) [(None, 12)] 0 __________________________________________________________________________________________________ flatten_9 (Flatten) (None, 2048) 0 max_pooling1d_5[0][0] __________________________________________________________________________________________________ flatten_8 (Flatten) (None, 12) 0 input_3[0][0] __________________________________________________________________________________________________ concatenate_3 (Concatenate) (None, 2060) 0 flatten_9[0][0] flatten_8[0][0] __________________________________________________________________________________________________ flatten_10 (Flatten) (None, 2060) 0 concatenate_3[0][0] __________________________________________________________________________________________________ r2_dense_0 (Dense) (None, 2048) 4220928 flatten_10[0][0] __________________________________________________________________________________________________ r2_dense_0_batchnorm (BatchNorm (None, 2048) 8192 r2_dense_0[0][0] __________________________________________________________________________________________________ flatten_11 (Flatten) (None, 2048) 0 r2_dense_0_batchnorm[0][0] __________________________________________________________________________________________________ r2_dense_1 (Dense) (None, 2048) 4196352 flatten_11[0][0] __________________________________________________________________________________________________ r2_dense_1_batchnorm (BatchNorm (None, 2048) 8192 r2_dense_1[0][0] __________________________________________________________________________________________________ flatten_12 (Flatten) (None, 2048) 0 r2_dense_1_batchnorm[0][0] __________________________________________________________________________________________________ r2_dense_2 (Dense) (None, 1024) 2098176 
flatten_12[0][0] __________________________________________________________________________________________________ r2_dense_2_batchnorm (BatchNorm (None, 1024) 4096 r2_dense_2[0][0] __________________________________________________________________________________________________ JointNormalM1M2diffsumMM_mean ( (None, 12) 12300 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ JointNormalM1M2diffsumMM_logsca (None, 12) 12300 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ JointNormalM1M2diffsumMM_logwei (None, 12) 12300 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ VonMisesMM_mean (Dense) (None, 64) 65536 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ VonMisesMM_logvar (Dense) (None, 32) 32800 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ VonMisesMM_logweight (Dense) (None, 32) 32800 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ NormalMM_mean (Dense) (None, 56) 57344 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ NormalMM_logvar (Dense) (None, 56) 57400 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ NormalMM_logweight (Dense) (None, 56) 57400 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ JointVonMisesFisherMM_mean (Den (None, 24) 24576 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ JointVonMisesFisherMM_logvar (D (None, 8) 8200 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ JointVonMisesFisherMM_logweight (None, 8) 8200 r2_dense_2_batchnorm[0][0] __________________________________________________________________________________________________ concatenate_4 (Concatenate) (None, 372) 0 JointNormalM1M2diffsumMM_mean[0][ JointNormalM1M2diffsumMM_logscale JointNormalM1M2diffsumMM_logweigh VonMisesMM_mean[0][0] VonMisesMM_logvar[0][0] VonMisesMM_logweight[0][0] NormalMM_mean[0][0] NormalMM_logvar[0][0] NormalMM_logweight[0][0] JointVonMisesFisherMM_mean[0][0] JointVonMisesFisherMM_logvar[0][0 JointVonMisesFisherMM_logweight[0 ================================================================================================== Total params: 12,931,748 Trainable params: 12,920,100 Non-trainable params: 11,648 __________________________________________________________________________________________________
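For orientation, the following is a minimal tf.keras sketch of how a trunk and head with the shapes summarised above can be assembled. It is not the published training code: the layer widths, pooling positions and output sizes are read off the summaries, the Conv1D kernel sizes are back-calculated from the "Param #" column via params = filters x (in_channels x kernel_size + 1), and everything else (strides, padding, the LeakyReLU slope, and the interpretation of the 144-unit outputs as 12 modes x 12 latent dimensions) is an assumption. The redundant Flatten layers that appear between the dense blocks are omitted, since flattening an already flat tensor is a no-op.

# Hedged sketch, assuming stride-1 'same' convolutions and default LeakyReLU slope.
import tensorflow as tf
from tensorflow.keras import layers

N_SAMPLES, N_CHANNELS = 4096, 3                    # whitened strain: 4096 samples x 3 detectors
CONV_FILTERS = [128, 128, 64, 64, 64, 64, 32, 32, 32, 32, 32, 32]
CONV_KERNELS = [128, 64, 64, 32, 32, 16, 16, 8, 8, 4, 4, 4]   # inferred from the Param # column

def shared_trunk():
    """Conv1D + BatchNorm trunk common to encoder_r1, encoder_q and decoder_r2."""
    activation = layers.LeakyReLU()                # one layer instance reused 12 times,
                                                   # hence the 'multiple' output shape above
    inputs = layers.Input(shape=(N_SAMPLES, N_CHANNELS), name="input_1")
    x = inputs
    for i, (f, k) in enumerate(zip(CONV_FILTERS, CONV_KERNELS)):
        x = layers.Conv1D(f, k, padding="same", name=f"shared_conv_{i}")(x)
        x = layers.BatchNormalization(name=f"shared_conv_{i}_batchnorm")(x)
        x = activation(x)
        if i % 2 == 1:                             # halve the time axis after every second block
            x = layers.MaxPooling1D(pool_size=2)(x)
    return inputs, x                               # final feature map: (None, 64, 32)

def build_encoder_r1(z_dim=12, n_modes=12):
    """r1 head: 1024-512-512 dense stack feeding mean / log-variance / mode-weight outputs.
    The 144-unit outputs are read here as n_modes * z_dim = 12 * 12, an assumption that is
    consistent with the 12-unit r1_modes_dense layer."""
    inputs, features = shared_trunk()
    h = layers.Flatten()(features)                 # (None, 2048)
    for j, width in enumerate([1024, 512, 512]):
        h = layers.Dense(width, name=f"r1_dense_{j}")(h)
        h = layers.BatchNormalization(name=f"r1_dense_{j}_batchnorm")(h)
    mean = layers.Dense(n_modes * z_dim, name="r1_mean_dense")(h)      # (None, 144)
    logvar = layers.Dense(n_modes * z_dim, name="r1_logvar_dense")(h)  # (None, 144)
    weights = layers.Dense(n_modes, name="r1_modes_dense")(h)          # (None, 12)
    outputs = layers.Concatenate()([mean, logvar, weights])            # (None, 300)
    return tf.keras.Model(inputs, outputs, name="encoder_r1")

build_encoder_r1().summary()   # Conv1D/Dense parameter counts and output shapes should
                               # reproduce the encoder_r1 table above

Under the same assumptions, encoder_q and decoder_r2 attach their own dense heads to this trunk, first concatenating the flattened trunk features with the 15-dimensional input_2 (encoder_q) or the 12-dimensional input_3 (decoder_r2), as the concatenate_1 and concatenate_3 rows show, before ending in their respective distribution-parameter layers.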