In recent years, computer science has made great progress in data-processing techniques. This progress has been crucial for the success of statistical methods across a wide range of sciences. In computational linguistics, statistics opens new ways to deal with language: it is now much easier to examine the actual use of language by processing large amounts of language in use (for example, corpora or the World Wide Web). By fitting statistical models to such data, so-called language models can be built. These can be seen as an attempt to model the use of language, and in this role they can predict certain uses of language. One aspect of language that we want to model, perhaps the most difficult one, is its meaning. To date there is no satisfying model for all aspects of the meaning of natural language, not even at the word level. Statistical methods, however, have proved to be a useful tool for creating alternative ways to model the meaning of words.
Looking at the use of language, we notice that the context of a word depends in part on the meaning of that word. Consider the word dog. If we browse the Internet for this word, we will probably encounter the word bark in its immediate context far more often than the word calculate. The context of a word thus encodes, or at least hints at, its meaning. Distributional semantics exploits this observation with statistical methods: to capture the meaning of a word x, we scan large amounts of data and build a table recording how often each word occurs in the context of x. This table can be interpreted as a vector, as known from school mathematics. We then have a collection of vectors, each representing the meaning of some word. The big advantage is that, as one may recall from school, vectors can easily be compared with one another; for example, we can say that the distance between two vectors is 5. This distance can be used as a measure of semantic similarity: the meaning of the word cat turns out to be more similar to the meaning of the word dog than to the meaning of the word spaghetti.
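The procedure just described can be sketched in a few lines of code. The snippet below is a minimal illustration, not a production implementation: the tiny corpus, the window size of 2, and the choice of plain Euclidean distance are all assumptions made for the example.

```python
from collections import Counter

# Toy corpus, invented purely for illustration.
corpus = (
    "the dog began to bark at the cat "
    "the dog would bark all night "
    "the cat chased the dog "
    "we calculate the sum we calculate the total "
    "spaghetti with tomato sauce"
).split()

vocab = sorted(set(corpus))

def context_vector(target, tokens, window=2):
    """Count how often each vocabulary word occurs within
    `window` positions of `target`, and return the counts
    as a vector ordered by the vocabulary."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return [counts[w] for w in vocab]

def euclidean(u, v):
    """Ordinary Euclidean distance between two count vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

v_dog = context_vector("dog", corpus)
v_cat = context_vector("cat", corpus)
v_spaghetti = context_vector("spaghetti", corpus)

# Even on this tiny corpus, dog's context vector lies closer
# to cat's than to spaghetti's.
print(euclidean(v_dog, v_cat) < euclidean(v_dog, v_spaghetti))  # True
```

In practice the counts come from corpora of millions of words, the raw frequencies are usually reweighted (e.g. with PMI), and cosine similarity is more common than Euclidean distance, but the principle is the same.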
In computational linguistics there is a wide range of applications of distributional semantics; it is used, for example, at Google. Nevertheless, linguists in particular seem, perhaps rightly, not to trust this approach. In the talk we aim to understand the mechanism better and to weigh the advantages and disadvantages of this approach to modelling the meaning of words.