Recent work (Mooney, 2007; He and Young, 2006;
Zettlemoyer and Collins, 2005) has developed learning
algorithms for the problem of mapping sentences
to underlying semantic representations. In one such
approach (Zettlemoyer and Collins, 2005), hereafter ZC05,
the input to the learning algorithm is a training set
consisting of sentences paired with lambda-calculus
expressions. For instance, the training data might
contain the following example:
  Sentence:     list flights to boston
  Logical Form: λx.flight(x) ∧ to(x, boston)
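To make the denotation of such an expression concrete, the following is a minimal sketch in Python that treats the logical form as a predicate and evaluates it against a small, hypothetical flight database; the entity names and relations are purely illustrative and are not part of the ZC05 learning algorithm.

# Hypothetical toy domain (illustrative only, not from ZC05).
FLIGHTS = {"AA101", "UA202", "DL303"}            # entities satisfying flight(x)
TO = {("AA101", "boston"), ("DL303", "denver")}  # pairs satisfying to(x, y)

def flight(x):
    return x in FLIGHTS

def to(x, y):
    return (x, y) in TO

# The lambda-calculus expression, written as a Python function of x.
logical_form = lambda x: flight(x) and to(x, "boston")

# Its denotation: the set of all entities in the domain that satisfy it.
domain = FLIGHTS | {x for x, _ in TO}
denotation = {x for x in domain if logical_form(x)}
print(denotation)  # {'AA101'}: the flights that land in Boston

The sketch simply enumerates the domain and keeps every entity on which the predicate is true, which is the set-denotation reading used below.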
In this case the lambda-calculus expression denotes
the set of all flights that land in Boston. In ZC05
it is assumed that training examples do not include
additional information, for example parse tree