Show simple item record

dc.contributor.advisor: Erzin, Engin
dc.contributor.advisor: Tekalp, Ahmet Murat
dc.contributor.advisor: Yemez, Yücel
dc.contributor.author: Demir, Yasemin
dc.date.accessioned: 2020-12-08T08:08:33Z
dc.date.available: 2020-12-08T08:08:33Z
dc.date.submitted: 2008
dc.date.issued: 2018-08-06
dc.identifier.uri: https://acikbilim.yok.gov.tr/handle/20.500.12812/170638
dc.description.abstract: Bu tezde çoklu modelli dans performans analizi ile müzikle sürülen dans sentezinin yapılabilmesi amaçlanmıştır. Dans figürleri, müzik harmonisi ile uyumlu, müzik ritmiyle eşzamanlıdır ve işitsel öznitelikler kullanılarak analiz edilebilir. Dans figürleriyle ilintili işitsel örüntüleri belirlemek amacıyla, işitsel veri spektrumuna ait öznitelikler, SMM yapıları kullanılarak modellenmiştir. Elde edilen işitsel örüntüler ve dans figürleri arasındaki ilinti, eşzamanlı gerçekleşme performansları hesaplanarak değerlendirilmiştir. İşitsel veri olarak mel frekansı kepstral katsayıları ve renksel parlaklığı temsil eden kroma öznitelikleri ele alınmıştır. Bu tezde sunulan sistem kullanılarak, işitsel veriyle sürülen vücut animasyonu sentezi çalışmalarının geliştirilmesi amaçlanmaktadır.
dc.description.abstract: We present a framework for audio-visual analysis of dance performances towards the goal of music-driven dance synthesis. Dance figures, which are performed synchronously with the musical rhythm, can be analyzed through the audio spectra using spectral and chromatic musical features. In the proposed multimodal dance performance analysis system, dance figures are manually labeled over the video stream and modeled by employing hidden Markov models (HMMs). The music segments, which correspond to beat and meter boundaries, are used to train HMM structures to learn meter-related temporal audio patterns that are correlated with the dance figures. Bi-gram based co-occurrences of temporal audio patterns and dance figures are calculated, and bi-gram based co-occurrence performances for two different audio feature streams are evaluated. In our evaluations, mel-scale cepstral coefficients (MFCC) with their first and second derivatives and chroma features are used as our candidate audio feature set. The proposed framework in this thesis can be used towards analysis and synthesis of audio-driven human body animation. [en_US]
dc.language: English
dc.language.iso: en
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: Attribution 4.0 United States [tr_TR]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Elektrik ve Elektronik Mühendisliği [tr_TR]
dc.subject: Electrical and Electronics Engineering [en_US]
dc.title: Music-driven dance synthesis by multimodal dance performance analysis
dc.title.alternative: Çoklu model dans performans analizi ile müzikle sürülen dans sentezinin yapılması
dc.type: masterThesis
dc.date.updated: 2018-08-06
dc.contributor.department: Elektrik ve Bilgisayar Mühendisliği Anabilim Dalı
dc.identifier.yokid: 341403
dc.publisher.institute: Fen Bilimleri Enstitüsü
dc.publisher.university: KOÇ ÜNİVERSİTESİ
dc.identifier.thesisid: 246831
dc.description.pages: 54
dc.publisher.discipline: Diğer
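
The English abstract above outlines a concrete pipeline: extract MFCC features (with first and second derivatives) and chroma features from the music, learn temporal audio patterns with HMMs, and relate those patterns to manually labeled dance figures through bi-gram co-occurrences. The following is a minimal sketch of that idea, not the thesis's implementation: it assumes the librosa and hmmlearn Python libraries, a hypothetical input file performance.wav, and placeholder dance-figure labels, and a single GaussianHMM stands in for the thesis's beat/meter-aligned HMM structures.

    # Sketch of the abstract's pipeline: audio features -> HMM pattern
    # labels -> co-occurrence counts against dance-figure labels.
    # All file names, parameters, and labels here are illustrative.
    from collections import Counter

    import librosa
    import numpy as np
    from hmmlearn import hmm

    # --- Candidate audio feature streams named in the abstract ---
    y, sr = librosa.load("performance.wav")          # hypothetical input
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    mfcc_d1 = librosa.feature.delta(mfcc)            # first derivatives
    mfcc_d2 = librosa.feature.delta(mfcc, order=2)   # second derivatives
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)

    # Stack frames as rows: shape (n_frames, n_features).
    features = np.vstack([mfcc, mfcc_d1, mfcc_d2, chroma]).T

    # --- Learn temporal audio patterns with an HMM ---
    # The thesis trains HMM structures over beat/meter-aligned segments;
    # here one GaussianHMM assigns a pattern label to every frame.
    model = hmm.GaussianHMM(n_components=8, covariance_type="diag", n_iter=50)
    model.fit(features)
    audio_patterns = model.predict(features)

    # --- Co-occurrence of audio patterns and dance figures ---
    # dance_figures: one manually assigned figure label per frame, taken
    # from the annotated video stream (placeholder zeros here).
    dance_figures = np.zeros(len(audio_patterns), dtype=int)
    cooccurrence = Counter(zip(audio_patterns, dance_figures))
    for (pattern, figure), count in cooccurrence.most_common(5):
        print(f"audio pattern {pattern} ~ dance figure {figure}: {count} frames")

Note that the thesis evaluates two feature streams (MFCC and chroma) separately and compares their bi-gram based co-occurrence performance; the single concatenated stream above is only for brevity.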

