An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement `We *know* the epidemic spread quickly' does not entail `We know the epidemic spread quickly via fleas', but `We *doubt* the epidemic spread quickly' entails `We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like `doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the hand-constructed lists that textual-inference systems currently use.
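The monotonicity pattern in the abstract can be illustrated with a toy model (this is only an illustrative sketch, not the paper's discovery algorithm): treat a proposition as the set of situations in which it holds, so a restriction like `spread quickly via fleas' denotes a subset of `spread quickly'. An upward-entailing operator such as `know' licenses inference from a subset to a superset, while a downward-entailing operator such as `doubt' reverses the direction.

```python
# Hypothetical toy model of monotonicity (not the paper's algorithm):
# a proposition is the set of situations where it is true, and a
# restriction of a proposition is a subset of that set.

broad = frozenset({"via_fleas", "via_rats", "via_air"})  # "spread quickly"
narrow = frozenset({"via_fleas"})                        # "spread quickly via fleas"
assert narrow <= broad  # the restricted claim denotes a subset

def entails_under(monotonicity, premise, hypothesis):
    """Does OP(premise) entail OP(hypothesis), given OP's monotonicity?"""
    if monotonicity == "upward":      # e.g. "know": subset -> superset is licensed
        return premise <= hypothesis
    if monotonicity == "downward":    # e.g. "doubt": superset -> subset is licensed
        return hypothesis <= premise
    return False

# "We know it spread quickly" does NOT entail "... quickly via fleas":
print(entails_under("upward", broad, narrow))    # False
# "We doubt it spread quickly" DOES entail "We doubt ... via fleas":
print(entails_under("downward", broad, narrow))  # True
```

The two calls mirror the `know'/`doubt' contrast from the abstract: the same premise-hypothesis pair flips from non-entailment to entailment when the embedding operator is downward entailing.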
@inproceedings{Danescu-Niculescu-Mizil+Lee+Ducott:09a,
  author    = {Cristian Danescu-Niculescu-Mizil and Lillian Lee and Richard Ducott},
  title     = {Without a `doubt'? Unsupervised discovery of downward-entailing operators},
  booktitle = {Proceedings of NAACL HLT},
  pages     = {137--145},
  year      = {2009}
}
This paper is based upon work supported in part by DHS grant N0014-07-1-0152, National Science Foundation grant No. BCS-0537606, a Yahoo! Research Alliance gift, a CU Provost’s Award for Distinguished Scholarship, and a CU Institute for the Social Sciences Faculty Fellowship. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of any sponsoring institutions, the U.S. government, or any other entity.