My research interests fall broadly into three categories: natural language understanding, knowledge representation and reasoning, and human-computer interaction. In each of the following sections, I give a brief outline of my work in the area and provide links to project pages. Those project pages include relevant references for further reading.
Natural Language Understanding
A formal representation of the knowledge within text, backed by ontologies and further background knowledge, is very valuable: it makes the content of text available for automated reasoning. Designing systems to build such representations automatically remains a difficult open problem. Over the past few years I have contributed to the Tractor project, which aims to understand the contents of short intelligence messages as part of a larger hard/soft information fusion system.
I am currently working on a new NLU system in the biomedical domain with Dr. Peter Elkin. While there is not yet much to write publicly about this system, some initial related publications are listed on the Biomedical Informatics NLU page.
Knowledge Representation and Reasoning
It is my belief that a highly expressive logic is necessary for representing the knowledge from text. An implemented system must be able to reason over knowledge in that logic in order to form a complete understanding and to make use of that knowledge. As part of my dissertation work I implemented CSNePS, based on the SNePS 3 specification, and designed and implemented Inference Graphs within that system. Inference Graphs are capable of natural deduction and subsumption inference.
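The graph-based inference idea can be loosely illustrated with a toy sketch. This is not the CSNePS implementation (which is written in Clojure and far more general); it is a minimal, assumed model in which rule nodes fire when all of their antecedents are asserted, propagating assertions forward to a fixed point:

```python
# Toy sketch of forward inference over a graph of proposition and rule
# nodes. All names here are illustrative, not CSNePS identifiers.

class RuleNode:
    """An implication node: when every antecedent is asserted,
    the consequent becomes asserted."""
    def __init__(self, antecedents, consequent):
        self.antecedents = set(antecedents)
        self.consequent = consequent

def forward_infer(asserted, rules):
    """Propagate assertions through rule nodes until no rule can fire."""
    derived = set(asserted)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.antecedents <= derived and rule.consequent not in derived:
                derived.add(rule.consequent)
                changed = True
    return derived

rules = [RuleNode({"Bird(Tweety)"}, "Flies(Tweety)")]
result = forward_infer({"Bird(Tweety)"}, rules)
```

Real Inference Graphs also handle introduction rules, negation, and concurrent message passing along the graph's channels; this sketch shows only the simplest elimination-style propagation.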
Human Computer Interaction
My interests in HCI mostly revolve around how people perform highly technical tasks with computers. My most recent work in this area has concerned interaction with knowledge bases and ontologies. Some of my HCI work has involved multimodal interfaces (sketch and speech in TabconV2), as well as modalities beyond the usual keyboard and mouse, such as computer vision in the AirTouch project.