A natural part of human conversation is to adapt what we say and how we say it to our conversational partner and the dialogue context. This adaptation includes syntactic, semantic and lexical variation. For machines to be perceived as natural conversational partners, the system output needs to be coherent with the current state of the dialogue. The research area of Natural Language Generation (NLG) has mainly been concerned with the generation of coherent text and monologue rather than with the generation of utterances in dialogue. Generation in dialogue systems has been regarded as fairly unproblematic, and simple, inflexible generation methods such as “canned” speech or templates have been used. These methods work well for task-oriented systems operating within limited domains. However, to produce the kind of spontaneous speech that characterizes human-human interaction, a system needs to master a number of capabilities: determining whether it has elicited adequate information from the user, contextual understanding, responsive interaction and human-like grounding behaviour. To generate such utterances in dialogue systems, we need to know when and how they are used in conversation between humans. I suggest a method that may teach us how to control these parameters. By synthesizing and replacing one of the parties in a recording of a human-human dialogue, I have simulated a dialogue system that behaves much as a human would. In my talk I will discuss issues related to human-like utterance generation and present results from an experiment in which a system with human-like behaviour was simulated. The results indicate that the system version based on a human speaker was perceived as more human-like, polite and intelligent than a system version with more traditional behaviour.