next up previous contents
Next: Applying probabilities to Data-Intensive Up: Probability and Language Models Previous: Results

Summary

This chapter has introduced the basics of probability and statistical language modelling.

In chapter 9 we add basic information theory to the repertoire we have already developed, and show how this tool applies to a word-clustering problem. Then in chapter [*] we return to the n-gram models introduced in the current chapter, combining them with information-theoretic ideas to explain the training algorithm that makes it possible for part-of-speech taggers and speech recognisers to work as well as they do.
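The n-gram models this summary refers back to can be illustrated with a minimal sketch: a bigram model with maximum-likelihood estimates, P(w2 | w1) = count(w1 w2) / count(w1). This is an illustration only, not code from the chapter; the sentence-boundary markers `<s>` and `</s>` and the toy corpus are assumptions introduced here.

```python
from collections import Counter

def train_bigram_model(sentences):
    """Estimate bigram probabilities from a list of tokenised sentences.

    Returns a dict mapping (w1, w2) pairs to the maximum-likelihood
    estimate of P(w2 | w1).
    """
    unigram = Counter()
    bigram = Counter()
    for sent in sentences:
        # Pad with hypothetical start/end markers so the model also
        # assigns probabilities to sentence-initial and -final words.
        tokens = ["<s>"] + sent + ["</s>"]
        unigram.update(tokens[:-1])
        bigram.update(zip(tokens[:-1], tokens[1:]))
    return {pair: count / unigram[pair[0]] for pair, count in bigram.items()}

# Toy corpus (invented for illustration).
corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
model = train_bigram_model(corpus)
print(model[("<s>", "the")])  # 1.0: both sentences begin with "the"
print(model[("the", "dog")])  # 0.5: "the" is followed by "dog" in one of two cases
```

The maximum-likelihood estimate assigns probability zero to any bigram unseen in training, which is one motivation for the smoothing and information-theoretic refinements the later chapters take up.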


Chris Brew
8/7/1998