The need for and importance of automatically recognizing emotions from human speech have grown with the increasing role of human-computer interaction applications. This paper explores the detection of domain-specific emotions using a fuzzy inference system (FIS) that classifies utterances into two emotion categories: negative and nonnegative. The input features combine segmental and suprasegmental acoustic information; feature subsets are selected from a 21-dimensional feature set and applied to the fuzzy classifier. The FIS is designed through a data-driven approach with two phases: initialization, for which the fuzzy c-means method is used, and fine-tuning of the fuzzy model's parameters, for which a well-known neuro-fuzzy method is used. Results on spoken dialog data from a call center application show that the optimized FIS with two rules (FIS-2) improves emotion classification by 63.0% for male data and 73.7% for female data over previous results obtained with a linear discriminant classifier.
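
To make the two-phase design concrete, the following is a minimal sketch, not the paper's implementation: fuzzy c-means seeds the rule antecedents, and plain gradient descent stands in for the neuro-fuzzy (ANFIS-style) fine-tuning. The Gaussian antecedent membership functions, the constant (zero-order Sugeno) rule consequents, and the synthetic feature data are all illustrative assumptions; the actual 21-dimensional acoustic feature set is not reproduced here.

```python
# Sketch of the two-phase, data-driven FIS-2 design: FCM initialization,
# then gradient-based fine-tuning of a two-rule Sugeno-type fuzzy model.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, tol=1e-5, max_iter=200, seed=0):
    """Phase 1: FCM clustering; centers and memberships seed the rule MFs."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                   # memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1))                   # standard FCM update
        U_new = inv / inv.sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

def fis_predict(X, centers, sigmas, consequents):
    """Two-rule Sugeno-type FIS: Gaussian antecedents, constant consequents."""
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigmas[:, None] ** 2))       # rule firing strengths
    wn = w / (w.sum(axis=0) + 1e-12)                     # normalized strengths
    return consequents @ wn                              # weighted rule outputs

def fit_fis2(X, y, lr=0.1, epochs=300):
    """Phase 2: tune MF spreads and consequents by gradient descent, a plain
    stand-in for the hybrid neuro-fuzzy learning used in the paper."""
    centers, U = fuzzy_c_means(X, c=2)
    # isotropic spread per rule from membership-weighted squared distances
    sigmas = np.sqrt(np.array([
        ((U[i] ** 2) * ((X - centers[i]) ** 2).sum(axis=1)).sum() / (U[i] ** 2).sum()
        for i in range(2)]))
    theta = np.concatenate([sigmas, [0.0, 1.0]])         # [sigma1, sigma2, f1, f2]

    def loss(t):
        return np.mean((fis_predict(X, centers, t[:2], t[2:]) - y) ** 2)

    eps = 1e-4
    for _ in range(epochs):                              # finite-difference gradients
        g = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                      for e in np.eye(len(theta))])
        theta -= lr * g
    return centers, theta[:2], theta[2:]

# Usage on synthetic stand-ins for the acoustic features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (60, 3)), rng.normal(1.0, 1.0, (60, 3))])
y = np.r_[np.zeros(60), np.ones(60)]                     # 0 = nonnegative, 1 = negative
centers, sigmas, consequents = fit_fis2(X, y)
pred = (fis_predict(X, centers, sigmas, consequents) > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

The two FCM clusters map directly onto the two rules of FIS-2, so the initialization already places one rule near each emotion class before any supervised tuning takes place.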