Thanks for the excellent open-source work!
I have two questions regarding the audio VAE in this project:
- What is the purpose of the audio VAE?
  From the dataset preparation code, the audio VAE does not appear to be involved in encoding or decoding the audio data (https://github.com/FunAudioLLM/ThinkSound/blob/master/data_utils/extract_training_audio.py#L80). Could you clarify its role in the overall architecture or training pipeline?
- Why is the conventional VAE approach (mel-spectrograms → VAE → vocoder) not used?
  The project does not appear to follow the commonly used pipeline in which audio is first converted into mel-spectrograms, then encoded by a VAE, and finally reconstructed by a vocoder. What were the considerations behind this design decision? For reference, a rough sketch of that conventional pipeline is included below.
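To make the second question concrete, here is a minimal sketch of the conventional pipeline I am describing. `MelSpectrogram` is a real torchaudio transform; `mel_vae` and `vocoder` are hypothetical placeholders, not components of this repository:

```python
import torch
import torchaudio

# Conventional pipeline: waveform -> mel-spectrogram -> VAE latent -> vocoder.
# The hyperparameters below are illustrative, not taken from this project.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=1024, hop_length=256, n_mels=80
)

def conventional_roundtrip(waveform: torch.Tensor, mel_vae, vocoder) -> torch.Tensor:
    mel = mel_transform(waveform)     # (channels, n_mels, frames)
    latent = mel_vae.encode(mel)      # compress the mel into a VAE latent
    mel_hat = mel_vae.decode(latent)  # reconstruct the mel from the latent
    return vocoder(mel_hat)           # vocoder maps the mel back to a waveform
```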
I would really appreciate it if you could provide some clarification on this. Thanks again for your great work!