
Prof. Dr. Felix Burkhardt

FGL Strukturprofessur


Phone: +49 (0)30 314-70402

Institution: Fachgebiet Kommunikationswissenschaft (Chair of Communication Science)
Secretariat: HBS 9
Building: HBS
Room: HBS 416
Address: Hardenbergstr. 16-18, 10623 Berlin
Office hours: Wed 4-5 pm

Academic Career

Prof. Dr. Felix Burkhardt has been a substitute professor (Vertretungsprofessor) at TU Berlin since October 1, 2020, where he heads the Chair of Communication Science at the Institute of Language and Communication.

In addition, he has been head of research at audEERING GmbH, a company specializing in machine audio analysis, since September 2018.

After completing his studies and dissertation at TU Berlin as a communication scientist and computer scientist, he first worked there as a research associate on DFG-funded projects. From 2000 to 2018 he was employed as a speech technology expert at T-Systems and Deutsche Telekom AG. He also serves intermittently as a reviewer for conferences, as an editor at the World Wide Web Consortium, and as a reviewer for EU Horizon 2020 calls.

Publications by Prof. Dr. Felix Burkhardt

  • F. Burkhardt, Markus Brückl and Björn Schuller: Age Classification: Comparison of Human vs Machine Performance in Prompted and Spontaneous Speech, Proc. ESSV, 2021, PDF
  • Benjamin Weiss, Jürgen Trouvain and F. Burkhardt: Acoustic Correlates of Likable Speakers in the NSC Database, in book: Voice Attractiveness, Studies on Sexy, Likable, and Charismatic Speakers, DOI: 10.1007/978-981-15-6627-1_13, 2020
  • F. Burkhardt, Milenko Saponja, Julian Sessner and Benjamin Weiss: How should Pepper Sound - Preliminary Investigations on Robot Vocalizations, Proc. of the ESSV 2019, 2019, PDF
  • F. Burkhardt and Benjamin Weiss: Speech Synthesizing Simultaneous Emotion-Related States, Proc. of the Specom 2018, 2018, PDF
  • Alice Baird, Emilia Parada-Cabaleiro, Simone Hantke, Felix Burkhardt, Nicholas Cummins and Björn Schuller: The Perception and Analysis of the Likeability and Human Likeness of Synthesized Speech, Proc. Interspeech, 2018
  • F. Burkhardt, Alexandra Steinhilber and Benjamin Weiss: Ironic Speech - Evaluating Acoustic Correlates by Means of Speech Synthesis, Proc. of the ESSV 2018, 2018, PDF
  • F. Burkhardt, Benjamin Weiss, Florian Eyben, Jun Deng and Björn Schuller: Detecting Vocal Irony, GSCL, Proc. Language Technologies for the Challenges of the Digital Age, Lecture Notes in Computer Science book series (LNCS, volume 10713), Lecture Notes in Artificial Intelligence book sub series (LNAI, volume 10713), 2017, PDF
  • Jun Deng, Florian Eyben, Björn Schuller and Felix Burkhardt: Deep neural networks for anger detection from real life speech data, Proc. Conference: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Oct. 2017
  • F. Burkhardt, C. Pelachaud, B. Schuller and E. Zovato: Emotion Markup Language, in Multimodal Interaction with W3C Standards: Towards Natural User Interfaces to Everything, Editor: Deborah A. Dahl, to be published by Springer, 2016
  • F. Burkhardt and U. Reichel: A Taxonomy for Specific Problem Classes in Text-to-Speech Synthesis Comparing Commercial and Open Source Performance, LREC, 2016, PDF
  • J. F. Sanchez-Rada, B. Schuller, V. Patti, P. Buitelaar, G. Vulcu, F. Burkhardt, C. Clavel, M. Petychakis, C. A. Iglesias: Towards a Common Linked Data Model for Sentiment and Emotion Analysis. Proceedings of the 6th International Workshop on Emotion and Sentiment Analysis workshop at LREC, 2016
  • F. Burkhardt: QUARK: Architecture for a Question Answering Machine, ESSV, Elektronische Sprachsignalverarbeitung, 2016, PDF
  • F. Burkhardt: Evaluating Commercial and Open-Source Text-to-Speech Synthesis Considering specific Problem Classes, ESSV, Elektronische Sprachsignalverarbeitung, 2015, PDF
  • B. Schuller, S. Steidl, A. Batliner, E. Nöth, A. Vinciarelli, F. Burkhardt, R. van Son, F. Weninger, F. Eyben; T. Bocklet; G. Mohammadi; B. Weiss: A Survey on Perceived Speaker Traits: Personality, Likability, Pathology, and the First Challenge. Computer Speech & Language, Elsevier, 2014
  • F. Burkhardt, C. Becker-Asano, E. Begoli, R. Cowie, G. Fobe, P. Gebhard, A. Kazemzadeh, I. Steiner, T. Llewellyn: Application of EmotionML. Proceedings of the 5th International Workshop on Emotion, Sentiment, Social Signals and Linked Open Data (ES3LOD), 2014, PDF
  • F. Burkhardt, N. Campbell: Emotional Speech Synthesis. In R.A. Calvo, S.K. D'Mello, J. Gratch and A. Kappas (Eds). Handbook of Affective Computing. Oxford University Press, 2014
  • Marc Schröder, Paolo Baggia, Felix Burkhardt, Catherine Pelachaud, Christian Peter, Enrico Zovato: Emotion Markup Language. In R.A. Calvo, S.K. D'Mello, J. Gratch and A. Kappas (Eds). Handbook of Affective Computing. Oxford University Press, 2014
  • F. Burkhardt: Voice Search in Mobile Applications with the Rootvole framework, Interspeech Lyon, 2013, PDF
  • F. Burkhardt, H.U. Nägeli: Voice Search in Mobile Applications and the Use of Linked Open Data, Interspeech Lyon, 2013, PDF
  • F. Burkhardt, J. Zhou, S. Seide, T. Scheerbarth, B. Jäkel and T. Buchner: Voice enabling the AutoScout24 car search app, ESSV, Elektronische Sprachsignalverarbeitung, 2013, PDF
  • Björn Schuller, Stefan Steidl, Anton Batliner, Felix Burkhardt, Laurence Devillers, Christian Müller and Shrikanth Narayanan: Paralinguistics in speech and language: State-of-the-art and the challenge, Computer Speech and Language, 2012, PDF
  • Ina Wechsung, Kathrin Jepsen, Felix Burkhardt, Annerose Köhler and Robert Schleicher: View from a Distance: Comparing Online and Retrospective UX-Evaluations, Mobile HCI, 2012
  • Benjamin Weiss and Felix Burkhardt: Is 'not bad' good enough? Aspects of unknown voices' likability, Interspeech, 2012, PDF
  • Björn Schuller, Stefan Steidl, Anton Batliner, Elmar Nöth, Alessandro Vinciarelli, Felix Burkhardt, Rob van Son, Felix Weninger, Florian Eyben, Tobias Bocklet, Gelareh Mohammadi, Benjamin Weiss: The INTERSPEECH 2012 Speaker Trait Challenge, Interspeech, 2012, PDF
  • F. Burkhardt and Jianshen Zhou: "AskWiki": Shallow Semantic Processing to Query Wikipedia, EUSIPCO, 2012, PDF
  • Björn Schuller, Zixing Zhang, Felix Weninger, Felix Burkhardt: Synthesized speech for model training in cross-corpus recognition of human emotion, International Journal for Speech Technology, 2012, PDF
  • F. Burkhardt: "You Seem Aggressive!" Monitoring Anger in a Practical Application, LREC, 2012, PDF
  • F. Burkhardt: Fast Labeling and Transcription with the Speechalyzer Toolkit, LREC, 2012, PDF
  • J. A. Gulla, J. Liu, F. Burkhardt, J. Zhou, C. Weiss, P. Myrseth, V. Haderlein, and O. Cerrato: Semantics and Search. Accepted in Sugumaran & Gulla (Eds.), Applied Semantic Web Technologies. Taylor & Francis, 2011.
  • M. Schröder, P. Baggia, F. Burkhardt, C. Pelachaud, C. Peter, and E. Zovato: EmotionML - an upcoming standard for representing emotions and related states, ACII Affective Computing and Intelligent Interaction, 2011
  • F. Burkhardt, B. Schuller, B. Weiss, F. Weninger: "Would You Buy A Car From Me?" - On the Likability of Telephone Voices, Interspeech, 2011, PDF
  • F. Burkhardt: An Affective Spoken Story Teller, Interspeech, 2011, PDF
  • F. Burkhardt, Speechalyzer: a Software Tool to Process Speech Data, ESSV, Elektronische Sprachsignalverarbeitung, 2011, PDF
  • M. Schröder, H. Pirker, M. Lamolle, F. Burkhardt, C. Peter and E. Zovato, Representing emotions and related states in technological systems. In P. Petta, R. Cowie, & C. Pelachaud (Eds.), Emotion-Oriented Systems - The Humaine Handbook (pp. 367-386). Springer, 2011
  • M. Schröder, P.Baggia, F. Burkhardt, C. Pelachaud, C. Peter, E. Zovato, Emotion Markup Language (EmotionML) 1.0 W3C Working Draft, 2011
  • M. Schröder, F. Burkhardt and S. Krstulovic, Synthesis of emotional speech. In Scherer, K. R., Bänziger, T. and Roesch, E. (Eds.). Blueprint for Affective Computing, pp. 222-231. Oxford, UK: Oxford University Press, 2010
  • B. Weiss and F. Burkhardt, Voice Attributes Affecting Likability Perception, Proc. Interspeech 2010, PDF
  • B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. Müller, S. Narayanan, The INTERSPEECH 2010 Paralinguistic Challenge, Proc. Interspeech 2010, PDF
  • M. Feld, C. Müller and F. Burkhardt, Automatic Speaker Age and Gender Recognition in the Car for Tailoring Dialog and Mobile Services, Proc. Interspeech 2010, PDF
  • F. Burkhardt, M. Eckert, J. Niemann, F. Oberle, T. Scheerbarth, S. Seide and J. Zhou: A Mobile Office And Entertainment System Based On Android, Proc. ESSV 2010, PDF
  • F. Burkhardt, Eckert, M., Johannsen, W. and J. Stegmann: A Database of Age and Gender Annotated Telephone Speech , Proc. LREC 2010, PDF
  • B. Schuller and F. Burkhardt: Learning with Synthesized Speech for Automatic emotion recognition. ICASSP 2010, PDF
  • K.-P. Engelbrecht, F. Burkhardt and S. Möller: Prediction of Turn-wise User Judgments from Acoustic Features of User Utterances. First International Workshop on Spoken Dialog Systems Technology (IWSDS) 2009, PDF.
  • F. Burkhardt and J. Stegmann: Emotional Speech Synthesis: Applications, History and Possible Future ESSV 2009, PDF.
  • F. Burkhardt, K.P. Engelbrecht, M. van Ballegooy, T. Polzehl and J. Stegmann: Emotion Detection in Dialog Systems - Usecases, Strategies and Challenges. ACII 2009, PDF.
  • F. Burkhardt: Rule-Based Voice Quality Variation with Formant Synthesis, Interspeech 2009, PDF
  • F. Burkhardt, T. Polzehl, J. Stegmann, F. Metze and R. Huber: Detecting Real Life Anger, ICASSP 2009, PDF
  • C. Weiss, J. A. Gulla, J. Liu, T. Brasethvik, F. Burkhardt, and J. Zhou: Ontology Evolution: A Case Study on Semantic Technology in the Media Domain. In M. M. Cruz-Cunha, E. F. Oliveira, A. J. Tavares, and L. G. Ferreira (Eds.), Handbook of Research on Social Dimensions of Semantic Technologies and Web Services, Chapter 6. IGI Global, 2009.
  • F. Burkhardt, R. Huber, J. Stegmann: Advances in Anger Detection With Real Life Data, ESSV 2008, PDF
  • F. Burkhardt and M. Schröder: Emotion Markup Language: Requirements with Priorities. W3C Incubator Group Report 2008.
  • F. Burkhardt, J. A. Gulla, J. Liu, C. Weiss and J. Zhou: Semi Automatic Ontology Engineering in Business Applications, Workshop Applications of Semantic Technologies, INFORMATIK 2008, PDF
  • F. Metze, R. Englert, U. Bub, F. Burkhardt, B. Kaspar and J. Stegmann: Getting Closer: Tailored Human-Computer Speech Dialog, UAIS journal, special issue on Vocal Interaction: Beyond Traditional Automatic Speech Recognition (Guest Editors: Susumu Harada, Sri Kurniawan, Adam J. Sporka), Issue 8/2, 2008.
  • T. Bocklet, A. Maier, J. Bauer, F. Burkhardt, and E. Nöth: Age and Gender Recognition for Telephone Applications Based on GMM Supervectors and Support Vector Machines. Proc. ICASSP 2008
  • J. Ajmera and F. Burkhardt: Age and Gender Classification using Modulation Cepstrum. Proc. Odyssey 2008: The Speaker and Language Recognition Workshop, 2008.
  • F. Burkhardt, F. Metze, and J. Stegmann. Speaker Classification for Next Generation Voice Dialog Systems. In Advances in Digital Speech Transmission, edited by M. Rainer, U. Heute, C. Antweiler. Wiley, 2007.
  • F. Burkhardt, R. Huber and A. Batliner: Application of Speaker Classification in Human Machine Dialog Systems, in Speaker Classification I: Fundamentals, Features, and Methods, edited by C. Müller, pp 174-179, Springer 2007 PDF
  • C. Müller and F. Burkhardt: Combining Short-term Cepstral and Long-term Prosodic Features for Automatic Recognition of Speaker Age. In Proceedings of the Interspeech 2007. Antwerp, Belgium. PDF
  • D. Oberle et al., DOLCE ergo SUMO: On foundational and domain models in SmartWeb Integrated Ontology (SWIntO), Web Semantics: Sci. Services Agents World Wide Web (2007), doi:10.1016/j.websem.2007.06.002
  • F. Metze, J. Ajmera, R. Englert, U. Bub, F. Burkhardt, J. Stegmann, C. Müller, R. Huber, B. Andrassy, J. G. Bauer, B. Littel: Comparison of Four Approaches to Age and Gender Recognition for Telephone Applications, Proc. ICASSP 2007, PDF
  • A. Batliner, F. Burkhardt, M. van Ballegooy, E. Nöth: A Taxonomy of Applications that Utilize Emotional Awareness, Proc. IS-LTC 2006, PDF
  • F. Burkhardt, J. Ajmera, R. Englert, J. Stegmann, W. Burleson: Detecting Anger in Automated Voice Portal Dialogs, Interspeech (ICSLP) 2006, PDF
  • F. Burkhardt, N. Audibert, L. Malatesta, O. Türk, L. Arslan & V. Auberge: Emotional Prosody - Does Culture Make A Difference?, Proc. Speech Prosody 2006, PDF
  • F. Burkhardt, M. van Ballegooy, R. Englert, R. Huber: An Emotion-Aware Voice Portal, ESSP 2005, PDF
  • F. Burkhardt, M. van Ballegooy, J. Stegmann: A Voiceportal Enhanced by Semantic Processing and Affect Awareness, GI Jahrestagung (2) 2005, PDF
  • F. Burkhardt, Emofilt: the Simulation of Emotional Speech by Prosody-Transformation, Interspeech 2005, PDF
  • F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, B. Weiss: A Database of German Emotional Speech, Interspeech 2005, PDF
  • F. Burkhardt, Simulation emotionaler Sprechweise mit Sprachsyntheseverfahren, doctoral dissertation at TU Berlin, Shaker Verlag 2001, PDF
  • F. Burkhardt & W. F. Sendlmeier, Verification of Acoustical Correlates of Emotional Speech using Formant-Synthesis, published twice: in Proc. ISCA Workshop (ITRW) on Speech and Emotion, Belfast 2000, and in Speech and Signals - Aspects of Speech Synthesis, Ed. W. F. Sendlmeier, Hector FFM, PDF
  • F. Burkhardt & W. F. Sendlmeier, Simulation emotionaler Sprechweise mit konkatenierender Sprachsynthese, unpublished, talk at the 30th annual GAL conference 1999 in Frankfurt am Main, PDF
  • F. Burkhardt & W. F. Sendlmeier, Simulation der Emotion Freude mit Sprachsyntheseverfahren, published at the DAGA day of the ASA conference 1999 in Berlin, PDF
  • F. Burkhardt, Simulation des emotionalen Sprecherzustands 'Freude' mit einem Sprachsyntheseverfahren, master's thesis at TU Berlin, 1999, PDF
  • Short summary of master's thesis "happiness in speech" in English, 1999, PDF