Junwon Lee 이준원
AI researcher
M.S. Student @ Music and Audio Computing Lab (Prof. Juhan Nam).
Research Interest: Machine Learning, Music & Audio Information Retrieval, Multimodal Generation.
james39@kaist.ac.kr
News
- Apr, 2024 | DCASE Sound Scene Synthesis Challenge launched! (Text-to-Audio) · Link
- Mar, 2024 | T-Foley code released!
- Dec, 2023 | One paper accepted at ICASSP 2024!
- If you have any questions about joining our lab, please contact me via email :)
Publications
Correlation of Fréchet Audio Distance With Human Perception of Environmental Audio Is Embedding Dependent
Modan Tailleur*, Junwon Lee*, Mathieu Lagrange, Keunwoo Choi, Laurie M. Heller, Keisuke Imoto, and Yuki Okamoto (* equal contribution)
32nd European Signal Processing Conference (EUSIPCO), 2024 (Submitted)
#Generation #Audio
paper · code · fadtk (FAD toolkit)

T-FOLEY: A Controllable Waveform-Domain Diffusion Model For Temporal-Event-Guided Foley Sound Synthesis
Yoonjin Chung*, Junwon Lee*, and Juhan Nam (* equal contribution)
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024
#Generation #Audio
paper · demo · code

Foley Sound Synthesis In Waveform Domain With Diffusion Model
Yoonjin Chung, Junwon Lee, and Juhan Nam
DCASE 2023 Challenge Task 7 (Foley Sound Synthesis) Technical Report — ranked 15th overall, 1st among models without a phase reconstruction model
#Generation #Audio
paper

Music Playlist Title Generation Using Artist Information
Haven Kim, Seungheon Doh, Junwon Lee, and Juhan Nam
AAAI-23 Workshop on Creative AI Across Modalities
#Generation #Language #Music
paper

Music Playlist Title Generation: A Machine-Translation Approach
Seungheon Doh, Junwon Lee, and Juhan Nam
2nd Workshop on Natural Language Processing for Music and Spoken Audio (NLP4MusA), 2021
#Generation #Language #Music
paper