In an information-seeking conversation, a user converses with an agent to ask a series of questions that can often be under- or over-specified. An ideal agent would first identify that it is in such a situation by searching its underlying knowledge source, and then interact with the user appropriately to resolve it. However, most existing studies either fail to incorporate such agent-side initiative or incorporate it only artificially. In this work, we present INSCIT (pronounced Insight), a dataset for Information-Seeking Conversations with mixed-initiative Interactions. It contains a total of 4.7K user-agent turns from 805 human-human conversations in which the agent searches over Wikipedia and either asks for clarification or provides relevant information to address user queries. We define two subtasks, namely evidence passage identification and response generation, as well as a new human evaluation protocol to assess model performance. We report results for two strong baselines based on state-of-the-art models of conversational knowledge identification and open-domain question answering. Both models significantly underperform humans and fail to generate coherent and informative responses, suggesting ample room for improvement in future studies.