Cocktail-Party Effect with Computational Auditory Scene Analysis - Preliminary Report

Hiroshi G. Okuno, Tomohiro Nakatani, and Takeshi Kawabata (NTT Basic Research Laboratories)

One of the most important and interesting phenomena in sophisticated human communication is the {\it cocktail party effect}: even at a crowded party, one can attend to a single conversation and then switch attention to another. To model this effect computationally, we need a mechanism for understanding general sounds, and Computational Auditory Scene Analysis (CASA) is a novel framework for manipulating sounds. We use it to model the cocktail party effect as follows: sound streams are first extracted from a mixture of sounds, and then one sound stream is selected by focusing attention on it. Because sound stream segregation is an essential first stage of processing for the cocktail party effect, in this paper we present a multi-agent approach to sound stream segregation. The resulting system can segregate a man's voice stream, a woman's voice stream, and a noise stream from a mixture of these sounds.
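The two-stage pipeline described above - segregate streams from a mixture, then select one by focusing attention - can be illustrated with a deliberately tiny sketch. This is not the paper's multi-agent system (which operates on real speech and noise); it is a toy in which each "agent" tracks one candidate sinusoidal component by correlation, and attention simply picks the most prominent stream. All function names (`make_tone`, `segregate`, `attend`) and parameters are hypothetical choices for this illustration.

```python
import math

def make_tone(freq, n=1000, fs=8000.0, amp=1.0):
    """Generate n samples of a sinusoid at the given frequency (Hz)."""
    return [amp * math.sin(2 * math.pi * freq * t / fs) for t in range(n)]

def stream_amplitude(mixture, freq, fs=8000.0):
    """Estimate the amplitude of one component by correlating the
    mixture with sine and cosine references at the candidate frequency."""
    n = len(mixture)
    s = sum(x * math.sin(2 * math.pi * freq * t / fs) for t, x in enumerate(mixture))
    c = sum(x * math.cos(2 * math.pi * freq * t / fs) for t, x in enumerate(mixture))
    return 2.0 * math.hypot(s, c) / n

def segregate(mixture, candidate_freqs, fs=8000.0):
    """One 'agent' per candidate frequency tracks its own stream."""
    return {f: stream_amplitude(mixture, f, fs) for f in candidate_freqs}

def attend(streams):
    """Focus attention on the most prominent stream."""
    return max(streams, key=streams.get)

# A 200 Hz "voice" (amplitude 1.0) mixed with a 1000 Hz "noise" (amplitude 0.3).
mixture = [a + b for a, b in zip(make_tone(200.0, amp=1.0),
                                 make_tone(1000.0, amp=0.3))]
streams = segregate(mixture, [200.0, 1000.0])
print(attend(streams))  # the 200 Hz stream dominates
```

Real CASA systems must of course group time-frequency components into streams without knowing the component frequencies in advance; here the candidate frequencies are given, which is what makes the sketch trivial.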