This paper describes the system presented at the Interspeech 2009 Emotion Challenge. It relies on both spectral and prosodic features to automatically detect the emotional state of the speaker. Because the two kinds of features have very different characteristics, they are treated separately by two sub-classifiers, one operating on the spectral features and the other on the prosodic ones. The outputs of these two sub-classifiers are then combined by a fusion system based on Support Vector Machines.
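
As a rough illustration of this two-stream architecture, the sketch below trains one classifier per feature stream and then fuses their class-posterior estimates with an SVM. The synthetic data, the feature dimensionalities, and the choice of SVMs as sub-classifiers are assumptions made for the sketch; the paper does not prescribe this exact implementation.

```python
# Minimal sketch of a two-stream classifier with SVM-based fusion.
# All data here is synthetic; feature extraction and the sub-classifier
# choices are assumptions, not the authors' implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 200 utterances, a spectral feature vector (e.g. MFCC
# statistics) and a prosodic one (e.g. pitch/energy statistics) per
# utterance, and 5 emotion classes.
n, n_spec, n_pros, n_classes = 200, 39, 12, 5
X_spec = rng.normal(size=(n, n_spec))
X_pros = rng.normal(size=(n, n_pros))
y = rng.integers(0, n_classes, size=n)

Xs_tr, Xs_te, Xp_tr, Xp_te, y_tr, y_te = train_test_split(
    X_spec, X_pros, y, test_size=0.25, random_state=0)

# One sub-classifier per feature stream (SVMs with probability outputs,
# purely as an assumption for this sketch).
clf_spec = SVC(probability=True).fit(Xs_tr, y_tr)
clf_pros = SVC(probability=True).fit(Xp_tr, y_tr)

def fusion_features(Xs, Xp):
    # Concatenate the two streams' class-posterior estimates; the fusion
    # SVM is trained on this joint representation.
    return np.hstack([clf_spec.predict_proba(Xs), clf_pros.predict_proba(Xp)])

# SVM-based fusion stage. In practice the fusion classifier would be
# trained on held-out or cross-validated sub-classifier outputs to avoid
# overfitting; the same split is reused here only to keep the sketch short.
fusion = SVC().fit(fusion_features(Xs_tr, Xp_tr), y_tr)
print(f"fused accuracy: {fusion.score(fusion_features(Xs_te, Xp_te), y_te):.3f}")
```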