30 December 2014

Natural Language Processing

Objectives:
• To acquire a basic understanding of linguistic concepts and of the complexity and variability of natural language.
• To acquire a basic understanding of machine learning techniques as applied to language.
• To implement N-gram models.

UNIT I
Introduction and Overview: What is natural language processing; hands-on demonstrations. Ambiguity and uncertainty in language. The Turing test.
Regular Expressions: The Chomsky hierarchy, regular languages, and their limitations. Finite-state automata. Practical regular expressions for finding and counting language phenomena. A little morphology. Exploring a large corpus with regex tools.
Programming in Python: An introduction to programming in Python. Variables, numbers, strings, arrays, dictionaries, conditionals, iteration. The NLTK (Natural Language Toolkit).
String Edit Distance and Alignment: A key algorithmic tool: dynamic programming; a simple example; use in optimal alignment of sequences. String edit operations, edit distance, and examples of use in spelling correction and machine translation (a short sketch follows this unit).
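
For the edit-distance topic above, here is a minimal Python sketch of the standard Levenshtein dynamic program. It is only an illustration, assuming unit costs for insertion, deletion, and substitution; the function name edit_distance and the example strings are choices made here, not part of the syllabus.

```python
def edit_distance(source, target):
    """Levenshtein distance via dynamic programming: the minimum number of
    insertions, deletions, and substitutions turning `source` into `target`."""
    m, n = len(source), len(target)
    # dp[i][j] = edit distance between source[:i] and target[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of source[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of target[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[m][n]

print(edit_distance("intention", "execution"))  # 5 with unit costs
```

Some treatments (e.g. Jurafsky and Martin) charge 2 for a substitution, which gives 8 for the same word pair; the table-filling idea is identical.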

UNIT II
Context Free Grammars: Constituency; CFG definition, use, and limitations. Chomsky Normal Form. Top-down parsing, bottom-up parsing, and the problems with each. The desirability of combining evidence from both directions.
Non-probabilistic Parsing: Efficient CFG parsing with CYK, another dynamic programming algorithm (a small recognizer is sketched after this unit). The Earley parser. Designing a little grammar and parsing with it on some test data.
Probability: Introduction to probability theory. Joint and conditional probability, marginals, independence, Bayes' rule, combining evidence. Examples of applications in natural language.
Information Theory: The "Shannon game", motivated by language! Entropy, cross-entropy, information gain. Its application to some language phenomena.
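
The CYK recognizer named in this unit fits in a few lines of Python. A sketch, assuming the grammar is already in Chomsky Normal Form; the toy grammar and the name cyk_recognize are invented here just to make the example runnable.

```python
def cyk_recognize(words, grammar, start="S"):
    """CYK recognition for a CFG in Chomsky Normal Form.
    `grammar` maps each left-hand side to a list of right-hand sides,
    each a tuple: (terminal,) for lexical rules or (B, C) for binary rules."""
    n = len(words)
    # table[i][j] = set of nonterminals that derive words[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    # Length-1 spans come from lexical rules A -> word.
    for i, word in enumerate(words):
        for lhs, rhss in grammar.items():
            if (word,) in rhss:
                table[i][i + 1].add(lhs)
    # Longer spans combine two shorter ones with binary rules A -> B C.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if len(rhs) == 2 and rhs[0] in table[i][k] and rhs[1] in table[k][j]:
                            table[i][j].add(lhs)
    return start in table[0][n]

# Toy CNF grammar: S -> NP VP, VP -> V NP, NP -> "they" | "fish", V -> "fish"
toy = {
    "S": [("NP", "VP")],
    "VP": [("V", "NP")],
    "NP": [("they",), ("fish",)],
    "V": [("fish",)],
}
print(cyk_recognize("they fish fish".split(), toy))  # True
```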

UNIT III
Language Modeling and Naive Bayes: Probabilistic language modeling and its applications. Markov models. N-grams. Estimating the probability of a word, and smoothing (a smoothed bigram model is sketched after this unit). Generative models of language.
Part-of-Speech Tagging and Hidden Markov Models: The Viterbi algorithm for finding the most likely HMM path. Dynamic programming with hidden Markov models, and its use for part-of-speech tagging, Chinese word segmentation, prosody, information extraction, etc.
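
A minimal sketch of the N-gram and smoothing topics above: a bigram model with add-one (Laplace) smoothing, the simplest of the smoothing methods the unit covers. The tiny corpus and the function names are invented for illustration.

```python
from collections import Counter

def train_bigram_model(sentences):
    """Count unigrams and bigrams over sentences padded with <s> and </s>."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams, vocab

def bigram_prob(w_prev, w, unigrams, bigrams, vocab):
    """Add-one smoothed bigram probability:
    P(w | w_prev) = (count(w_prev, w) + 1) / (count(w_prev) + |V|)."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + len(vocab))

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi, V = train_bigram_model(corpus)
print(bigram_prob("the", "cat", uni, bi, V))  # 2/8 = 0.25 on this toy corpus
```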

UNIT IV
Probabilistic Context Free Grammars: Weighted context-free grammars. Weighted CYK. Pruning and beam search.
Parsing with PCFGs: A treebank and what it takes to create one. The probabilistic version of CYK (sketched after this unit). Also: how do humans parse? Experiments with eye-tracking. Modern parsers.
Maximum Entropy Classifiers: The maximum entropy principle and its relation to maximum likelihood. Maximum entropy classifiers and their application to document classification, sentence segmentation, and other language tasks.
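
One way to sketch the probabilistic version of CYK mentioned above: fill the same chart as plain CYK, but keep the probability of the best sub-parse for each nonterminal and span. The rule probabilities and names below are hypothetical, chosen only to make the example runnable.

```python
def pcyk_best_prob(words, rules, lexicon, start="S"):
    """Probability of the best parse under a PCFG in Chomsky Normal Form.
    `rules` maps (A, B, C) -> P(A -> B C); `lexicon` maps (A, word) -> P(A -> word)."""
    n = len(words)
    # best[i][j][A] = probability of the best A-parse of words[i:j]
    best = [[dict() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, word in enumerate(words):
        for (lhs, term), p in lexicon.items():
            if term == word and p > best[i][i + 1].get(lhs, 0.0):
                best[i][i + 1][lhs] = p
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):
                for (lhs, left, right), p in rules.items():
                    if left in best[i][k] and right in best[k][j]:
                        cand = p * best[i][k][left] * best[k][j][right]
                        if cand > best[i][j].get(lhs, 0.0):
                            best[i][j][lhs] = cand
    return best[0][n].get(start, 0.0)

# Toy PCFG with made-up probabilities
rules = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
lexicon = {("NP", "they"): 0.5, ("NP", "fish"): 0.5, ("V", "fish"): 1.0}
print(pcyk_best_prob("they fish fish".split(), rules, lexicon))  # 0.25
```

Pruning and beam search, also listed above, simply drop low-probability entries from each cell of this chart to keep parsing fast.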

UNIT V
Maximum Entropy Markov Models & Conditional Random Fields: Part-of-speech tagging, noun-phrase segmentation, and information extraction models that combine maximum entropy and finite-state machines. State-of-the-art models for NLP.
Lexical Semantics: Mathematics of the multinomial and Dirichlet distributions; the Dirichlet as a smoothing prior for multinomials (a small example follows this unit).
Information Extraction & Reference Resolution: Various methods, including HMMs. Models of anaphora resolution. Machine learning methods for coreference.
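
The "Dirichlet as a smoothing prior for multinomials" point above reduces to a one-line formula for the posterior mean: P(w) = (count(w) + alpha) / (N + alpha * |V|). A small sketch, assuming a symmetric Dirichlet(alpha) prior; the counts and vocabulary below are invented for illustration.

```python
from collections import Counter

def dirichlet_smoothed_probs(counts, vocab, alpha=1.0):
    """Posterior-mean estimate of a multinomial under a symmetric
    Dirichlet(alpha) prior: (count(w) + alpha) / (N + alpha * |V|).
    With alpha = 1 this is exactly add-one (Laplace) smoothing."""
    total = sum(counts.values())
    return {w: (counts[w] + alpha) / (total + alpha * len(vocab)) for w in vocab}

counts = Counter({"fish": 3, "river": 1})
vocab = {"fish", "river", "bank"}   # "bank" is unseen but still gets probability mass
print(dirichlet_smoothed_probs(counts, vocab, alpha=0.5))
```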

TEXT BOOKS:
1. "Speech and Language Processing" by Jurafsky and Martin, Prentice Hall
2. "Statistical Natural Language Processing" by Manning and Schutze, MIT Press
3. "Natural Language Understanding" by James Allen, The Benajmins/Cummings Publishing Company

REFERENCE BOOKS:
1. "Elements of Information Theory" by Thomas M. Cover and Joy A. Thomas, Wiley.
2. "Statistical Language Learning" by Eugene Charniak, The MIT Press.
3. "Statistical Methods for Speech Recognition" by Frederick Jelinek, The MIT Press.
4. "Learning Python" by Mark Lutz and David Ascher, O'Reilly.
