A computational model of sound stream segregation with the multi-agent paradigm

Tomohiro Nakatani, Takeshi Kawabata, and Hiroshi G. Okuno (NTT Basic Research Laboratories)

This paper presents a new computational model for sound stream segregation based on a multi-agent paradigm. Sound streams are thought to play a key role in auditory scene analysis, which provides a general framework for auditory research, including voiced speech and music. Each agent is dynamically allocated to a sound stream and segregates it by focusing on consistent attributes. Agents interact with one another to resolve stream interference. In this paper, we design agents that segregate harmonic streams and a noise stream. The presented system can segregate all the streams from a mixture of male voiced speech, female voiced speech, and background non-harmonic noise.
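The dynamic allocation of agents to streams described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes, for illustration, that each frame of the mixture has already been reduced to a fundamental-frequency estimate (`f0`, in Hz, or `None` for non-harmonic frames), that the "consistent attribute" each agent tracks is pitch continuity within a tolerance, and that unmatched harmonic frames spawn new agents while non-harmonic frames fall to a noise stream:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Tracks one harmonic stream via a consistent attribute (here: pitch in Hz)."""
    pitch: float
    frames: list = field(default_factory=list)

    def accepts(self, f0: float, tol: float = 10.0) -> bool:
        # A frame is consistent with this stream if its pitch is close enough.
        return abs(f0 - self.pitch) <= tol

def segregate(frames, tol: float = 10.0):
    """Dynamically allocate agents to streams.

    `frames` is a sequence of per-frame f0 estimates (Hz) or None for
    non-harmonic frames. Harmonic frames are claimed by the first agent
    whose tracked pitch is consistent; unmatched frames spawn a new agent;
    non-harmonic frames are collected into a single noise stream.
    """
    agents, noise = [], []
    for t, f0 in enumerate(frames):
        if f0 is None:
            noise.append(t)          # non-harmonic residue -> noise stream
            continue
        for a in agents:
            if a.accepts(f0, tol):
                a.frames.append(t)
                # Smooth the tracked pitch so the agent follows slow drift.
                a.pitch = 0.9 * a.pitch + 0.1 * f0
                break
        else:
            agents.append(Agent(pitch=f0, frames=[t]))  # allocate a new agent
    return agents, noise

# Toy mixture: a ~120 Hz voice, a ~220 Hz voice, and noise-only frames.
agents, noise = segregate([120.0, 220.0, None, 122.0, 218.0, None, 119.0])
```

In this toy run, two agents emerge (one per voice) and the noise stream collects the two non-harmonic frames, mirroring the paper's scenario of two voices plus background noise; resolving interference between competing agents would require additional inter-agent negotiation not sketched here.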