A model of key finding is presented for single-voiced pieces of tonal music. Each tone is input as a pitch class and a duration. The model searches for the key in parallel in the scalar and chordal domains, taking primacy and memory constraints into account. It has been tested on a range of tonal music, including the fugue subjects of J. S. Bach's Wohltemperierte Klavier (WTK). The notated key was usually found after a few processing steps and remained stable thereafter, while staying sensitive to modulation. The model's performance was compared with that of the key-finding models previously proposed by Krumhansl and Schmuckler and by Longuet-Higgins and Steedman. The comparison showed that the new model's most distinctive features, namely the parallel key search in the scalar and chordal domains and the search-restricting factors of primacy and memory, make it a powerful and plausible alternative to the earlier models. The parallel-processing model's perceptual plausibility was subsequently tested in two experiments, in which 20 musically well-trained subjects produced the key(s) of eight WTK fugue themes (Experiment 1) and rated the key transparency of seven contrapuntal variations of the A minor subject of J. S. Bach's Kunst der Fuge (Experiment 2). Both experiments showed substantial concordance between listeners' judgments and the key inferences produced by the model. Conceptual limitations are discussed, such as the model's disregard for the potential impact of recency on key finding and for expectations arising from the functional implications of tone order. Potential extensions of the model are suggested, as well as ideas for further perceptual studies in which the model could be tested in a more advanced manner than in the present study.
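As a point of reference for the comparison described above, the Krumhansl-Schmuckler model correlates a duration-weighted pitch-class profile of the input with templates for all 24 major and minor keys. The sketch below illustrates that baseline approach only, not the parallel-processing model of this paper; the template values are the commonly cited Krumhansl-Kessler ratings, and the `find_key` function and (pitch class, duration) note encoding are illustrative assumptions.

```python
# Illustrative sketch of Krumhansl-Schmuckler-style key finding:
# correlate a duration-weighted pitch-class profile with 24 key templates.
# Template values: widely cited Krumhansl-Kessler probe-tone ratings.

MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
         2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
         2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def find_key(notes):
    """notes: list of (pitch_class, duration) pairs (hypothetical encoding).

    Builds a duration-weighted pitch-class profile, then returns the key
    whose rotated template correlates best with that profile.
    """
    profile = [0.0] * 12
    for pc, dur in notes:
        profile[pc % 12] += dur
    best = None
    for tonic in range(12):
        for mode, template in (("major", MAJOR), ("minor", MINOR)):
            # Rotate so that index `tonic` holds the template's tonic value.
            rotated = [template[(pc - tonic) % 12] for pc in range(12)]
            r = pearson(profile, rotated)
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]

# Example: an ascending C major scale with equal durations.
scale = [(pc, 1.0) for pc in (0, 2, 4, 5, 7, 9, 11)]
print(find_key(scale))  # -> C major
```

Unlike the parallel-processing model, this baseline treats the input as an unordered distribution, so it cannot account for primacy, memory decay, or tone order, which is precisely the gap the paper's model addresses.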