NOTE
Machine Learning for NLP: New Developments and Challenges

These slides are still incomplete. A more complete version will be posted at a later date at:
http://www.cs.berkeley.edu/~klein/nips-tutorial

Dan Klein
Computer Science Division
University of California at Berkeley

What is NLP?
  Fundamental goal: deep understanding of broad language
  End systems that we want to build:
    Ambitious: speech recognition, machine translation, information extraction, dialog interfaces, question answering…
    Modest: spelling correction, text categorization…
  Sometimes we're also doing computational linguistics

Speech Systems
  Automatic Speech Recognition (ASR)
    Audio in, text out
    SOTA: 0.3% error for digit strings, 5% for dictation, 50%+ for TV
  Text to Speech (TTS)
    Text in, audio out
    SOTA: totally intelligible (if sometimes unnatural)

Machine Translation
  Translation systems encode:
    Something about fluent language
    Something about how two languages correspond
  SOTA: for easy language pairs, better than nothing, but more an understanding aid than a replacement for human translators

Information Extraction
  Information Extraction (IE): unstructured text to database entries

    New York Times Co. named Russell T. Lewis, 45, president and general
    manager of its flagship New York Times newspaper, responsible for all
    business-side activities. He was executive vice president and deputy
    general manager. He succeeds Lance R. Primis, who in September was
    named president and chief operating officer of the parent.

    Person            Company                    Post                           State
    Russell T. Lewis  New York Times newspaper   president and general manager  start
    Russell T. Lewis  New York Times newspaper   executive vice president       end
    Lance R. Primis   New York Times Co.         president and CEO              start

  SOTA: perhaps 70% accuracy for multi-sentence templates, 90%+ for single easy fields

Question Answering
  More than search
  Ask general comprehension questions of a document collection
    Can be really easy: "What's the capital of Wyoming?"
    Can be harder: "How many US states' capitals are also their largest cities?"
    Can be open ended: "What are the main issues in the global warming debate?"
  SOTA: can do factoids, even when text isn't a perfect match

Goals of this Tutorial
  Introduce some of the core NLP tasks
  Present the basic statistical models
  Highlight recent advances
  Highlight recurring constraints on the use of ML techniques
  Highlight ways this audience could really help out

Recurring Issues in NLP Models
  Inference on the training set is slow enough that discriminative methods can be prohibitive
  Need to scale to millions of features
    Indeed, we tend to have more features than data points, and it all works out OK
  Kernelization is almost always too expensive, so everything's done with primal methods
  Need to gracefully handle unseen configurations and words at test time
  Severe non-stationarity when systems are deployed in practice
  Pipelined systems, so we need relatively calibrated probabilities; also, errors often cascade

Outline
  Language Modeling
  Syntactic / Semantic Parsing
  Machine Translation
  Information Extraction
  Unsupervised Learning

Speech in a Slide
  Frequency gives pitch; amplitude gives volume
  [spectrogram of "s p ee ch l a b": frequency vs. time, amplitude as intensity]
  Frequencies at each time slice are processed into observation vectors: a12 a13 a12 a14 a14 …

The Noisy-Channel Model
  We want to predict a sentence given acoustics
  The noisy-channel approach:
    Acoustic model: HMMs over word positions with mixtures of Gaussians as emissions
    Language model: distributions over sequences of words (sentences)
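In equation form, mirroring the MT decoding rule given later in these slides (standard noisy-channel reasoning; the slide itself presents this only as a diagram):

    w* = argmax_w P(w | a) = argmax_w P(a | w) P(w)

where a is the acoustics, P(a | w) is the acoustic model, and P(w) is the language model.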

Language Models
  In general, we want to place a distribution over sentences
  Classic solution: n-gram models
  N-gram models are (weighted) regular languages
  Natural language is not regular
    Many linguistic arguments
    Long-distance effects: "The computer which I had just put into the machine room on the fifth floor crashed."
  N-gram models often work well anyway (esp. with large n)

Language Model Samples
  Unigram:
    [fifth, an, of, futures, the, an, incorporated, a, a, the, inflation, most, dollars, quarter]
    [that, or, limited, the]
    []
    [after, any, on, consistently, hospital, lake, of, of, other, and, factors, raised, analyst, too, allowed, mexico, never, consider, fall, bungled, davison, that, obtain, price, lines, the, to, sass, the, the, further, board, a, details, machinists, …, nasdaq]
  Bigram:
    [outside, new, car, parking, lot, of, the, agreement, reached]
    [although, common, shares, rose, forty, six, point, four, hundred, dollars, from, thirty, seconds, at, the, greatest, play, disingenuous, to, be, reset, annually, the, buy, out, of, american, brands, vying, for, mr., womack, currently, share, data, incorporated, believe, chemical, prices, undoubtedly, will, be, as, much, is, scheduled, to, conscientious, teaching]
    [this, would, be, a, record, november]
  PCFG (later):
    [This, quarter, 's, surprisingly, independent, attack, paid, off, the, risk, involving, IRS, leaders, and, transportation, prices, .]
    [It, could, be, announced, sometime, .]
    [Mr., Toseland, believes, the, average, defense, economy, is, drafted, from, slightly, more, than, 12, stocks, .]

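As a concrete sketch of how samples like these can be generated (my own minimal Python illustration, not code from the tutorial): estimate bigram counts by maximum likelihood, then walk the chain from a start symbol until a stop symbol is drawn.

    import random
    from collections import defaultdict

    def train_bigram(corpus):
        # MLE bigram counts; <s> and </s> mark sentence start and end.
        counts = defaultdict(lambda: defaultdict(int))
        for sentence in corpus:
            tokens = ["<s>"] + sentence + ["</s>"]
            for prev, word in zip(tokens, tokens[1:]):
                counts[prev][word] += 1
        return counts

    def sample(counts):
        # Draw one sentence by repeatedly sampling the next word
        # in proportion to its bigram count.
        sentence, prev = [], "<s>"
        while True:
            words = list(counts[prev])
            weights = [counts[prev][w] for w in words]
            word = random.choices(words, weights=weights)[0]
            if word == "</s>":
                return sentence
            sentence.append(word)
            prev = word

    corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
    print(sample(train_bigram(corpus)))
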
Smoothing
  Dealing with sparsity well: smoothing / shrinkage
  For most histories P(w | h), relatively few observations
  Very intricately explored for the speech n-gram case
  Easy to do badly

  P(w | denied the), observed counts:    P(w | denied the), smoothed:
    3 allegations                          2.5 allegations
    2 reports                              1.5 reports
    1 claims                               0.5 claims
    1 request                              0.5 request
    …                                      2 other
    7 total                                7 total

Interpolation / Dirichlet Priors
  Problem: P(w | h) is supported by few counts
  Solution: share counts with related histories
  [plot: fraction of test n-grams seen in training vs. number of training words (0 to 1,000,000), for unigrams, bigrams, and rules]
  Despite the classic mixture formulation, interpolation can be viewed as a hierarchical Dirichlet prior [MacKay and Peto, 94]
    Each level's distribution is drawn from a prior centered on the back-off distribution
    Strength of the prior is related to the mixing weights
  Problem: this kind of smoothing doesn't work well empirically
  All the details you could ever want: [Chen and Goodman, 98]

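Concretely, the classic mixture formulation interpolates the n-gram orders, e.g. for trigrams:

    P_interp(w | w-2 w-1) = λ3 P(w | w-2 w-1) + λ2 P(w | w-1) + λ1 P(w),   with λ1 + λ2 + λ3 = 1

The hierarchical Dirichlet view then ties the λ's to the strength of each level's prior.
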
Kneser-Ney: Discounting
  N-grams occur more in training than they will later:

    Count in 22M Words   Avg in Next 22M   Good-Turing c*
    1                    0.448             0.446
    2                    1.25              1.26
    3                    2.24              2.24
    4                    3.23              3.24

  Absolute discounting: save ourselves some time and just subtract 0.75 (or some d)
    Maybe have a separate value of d for very low counts

Kneser-Ney: Details
  Kneser-Ney smoothing combines several ideas:
    Absolute discounting
    Lower-order models take a special form
  KN smoothing has repeatedly proven effective
    But we've never been quite sure why
    And therefore never known how to make it better
  [Teh, 2006] shows KN smoothing is a kind of approximate inference in a hierarchical Pitman-Yor process (and better approximations are superior to basic KN)

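A sketch of how absolute discounting and the special lower-order (continuation) distribution combine in the bigram case; this is the standard interpolated Kneser-Ney formulation, which the slides describe only in words:

    from collections import defaultdict

    def kn_bigram(counts, d=0.75):
        # counts: dict mapping (prev_word, word) -> observed count.
        ctx_total = defaultdict(int)    # c(h): total count of history h
        ctx_types = defaultdict(int)    # N1+(h, .): distinct words following h
        cont_types = defaultdict(int)   # N1+(., w): distinct histories preceding w
        for (h, w), c in counts.items():
            ctx_total[h] += c
            ctx_types[h] += 1
            cont_types[w] += 1
        num_bigram_types = len(counts)

        def prob(w, h):
            # Discounted ML estimate, plus the leftover mass times the
            # continuation probability ("how many contexts does w open?").
            disc = max(counts.get((h, w), 0) - d, 0.0) / ctx_total[h]
            leftover = d * ctx_types[h] / ctx_total[h]
            p_cont = cont_types[w] / num_bigram_types
            return disc + leftover * p_cont

        return prob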

Data >> Method?
  Having more data is always better…
  [plot: test-set entropy vs. n-gram order (1-20), for Katz vs. Kneser-Ney smoothing trained on 100,000, 1,000,000, 10,000,000, and all words]
  … but so is using a better model
  Another issue: n > 3 has huge costs in speech recognizers

Beyond N-Gram LMs
  Lots of ideas we won't have time to discuss:
    Caching models: recent words more likely to appear again
    Trigger models: recent words trigger other words
    Topic models
  A few recent ideas I'd like to highlight:
    Syntactic models: use tree models to capture long-distance syntactic effects [Chelba and Jelinek, 98]
    Discriminative models: set n-gram weights to improve final task accuracy rather than fit training-set density [Roark, 05, for ASR; Liang et al., 06, for MT]
    Structural zeros: some n-grams are syntactically forbidden, keep estimates at zero [Mohri and Roark, 06]

Outline
  Language Modeling
  Syntactic / Semantic Parsing
  Machine Translation
  Information Extraction
  Unsupervised Learning

Phrase Structure Parsing
  Phrase structure parsing organizes syntax into constituents or brackets
  In general, this involves nested trees
  Linguists can, and do, argue about what the gold structures should be
  Lots of ambiguity
  Not the only kind of syntax…
  [example tree: S, VP, NP, PP, N' nodes over "new art critics write reviews with computers"]

Syntactic Ambiguities
  Prepositional phrases: They cooked the beans in the pot on the stove with handles.
  Particle vs. preposition: The puppy tore up the staircase.
  Complement structures: The tourists objected to the guide that they couldn't hear.
  Gerund vs. participial adjective: Visiting relatives can be boring.
  Many more ambiguities
  Note that most incorrect parses are structures which are permitted by the grammar but not salient to a human listener, like the examples above

Probabilistic Context-Free Grammars
  A context-free grammar is a tuple <N, T, S, R>
    N: the set of non-terminals
      Phrasal categories: S, NP, VP, ADJP, etc.
      Parts-of-speech (pre-terminals): NN, JJ, DT, VB
    T: the set of terminals (the words)
    S: the start symbol
      Often written as ROOT or TOP
      Not usually the sentence non-terminal S
    R: the set of rules
      Of the form X → Y1 Y2 … Yk, with X, Yi ∈ N
      Examples: S → NP VP, VP → VP CC VP
      Also called rewrites, productions, or local trees
  A PCFG adds a top-down production probability per rule, P(Y1 Y2 … Yk | X)

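To make the definition concrete, here is a toy probabilistic CKY chart parser over an invented CNF grammar (my own illustration; the grammar, lexicon, and probabilities are made up):

    from collections import defaultdict

    # Binary rules map LHS -> list of ((B, C), probability).
    binary = {
        "S":  [(("NP", "VP"), 1.0)],
        "NP": [(("DT", "NN"), 0.8), (("NP", "PP"), 0.2)],
        "VP": [(("VB", "NP"), 0.7), (("VP", "PP"), 0.3)],
        "PP": [(("IN", "NP"), 1.0)],
    }
    lexicon = {
        "DT": {"the": 1.0},
        "NN": {"dog": 0.5, "telescope": 0.5},
        "VB": {"saw": 1.0},
        "IN": {"with": 1.0},
    }

    def cky(words):
        # best[i, j, X] = max probability of X spanning words[i:j].
        n = len(words)
        best = defaultdict(float)
        back = {}
        for i, w in enumerate(words):
            for tag, dist in lexicon.items():
                if w in dist:
                    best[i, i + 1, tag] = dist[w]
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):            # split point
                    for X, rules in binary.items():
                        for (B, C), p in rules:
                            score = p * best[i, k, B] * best[k, j, C]
                            if score > best[i, j, X]:
                                best[i, j, X] = score
                                back[i, j, X] = (k, B, C)
        return best[0, n, "S"], back

    prob, _ = cky("the dog saw the telescope".split())
    print(prob)   # 0.112 for this toy grammar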

Treebank Parsing
  Typically get grammars (and their parameters) from a treebank of parsed sentences
  [example tree, with rules read off it: ROOT → S, S → NP VP ., NP → PRP, VP → VBD ADJ]

Treebank Grammar Scale
  Treebank grammars can be enormous
    As FSAs, the raw grammar has ~10K states, excluding the lexicon
  Better parsers usually make the grammars larger, not smaller
  [figure: grammar-as-FSA fragment over DET, ADJ, NOUN, PLURAL NOUN, NP, PP, CONJ]

PCFGs and Independence
  Symbols in a PCFG imply conditional independence:
    e.g. with S → NP VP and NP → DT NN
  At any node, the productions inside that node are independent of the material outside that node, given the label of that node
  Any information that statistically connects behavior inside and outside a node must be encoded into that node's label

Non-Independence
  Independence assumptions are often too strong
  Example: the expansion of an NP is highly dependent on the parent of the NP (i.e., subjects vs. objects):

    Expansion   All NPs   NPs under S   NPs under VP
    NP PP       11%       9%            23%
    DT NN       9%        9%            7%
    PRP         6%        21%           4%

  Also: the subject and object expansions are correlated

The Game of Designing a Grammar
  Symbol refinement can improve the fit of the grammar:
    Parent annotation [Johnson '98]
    Head lexicalization [Collins '99, Charniak '00]
    Automatic clustering [Matsuzaki et al. '05, Petrov et al. '06]

Manual Annotation
  Manually split categories. Examples:
    NP: subject vs. object
    DT: determiners vs. demonstratives
    IN: sentential vs. prepositional
  Fairly compact grammar
  Linguistic motivations

  Model                    F1
  Naïve Treebank Grammar   72.6
  Klein & Manning '03      86.3
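As an illustration of the simplest refinement above, parent annotation: relabel each non-terminal with its parent's category, so an NP under S (a subject) and an NP under VP (an object) become distinct symbols. The tree encoding here is my own minimal one, not the slides':

    # A minimal sketch of parent annotation (the Johnson '98 idea):
    # an NP under S becomes NP^S, an NP under VP becomes NP^VP, etc.
    def parent_annotate(tree, parent=None):
        # tree: (label, children) for internal nodes, or a word string.
        if isinstance(tree, str):        # leaf: the word itself
            return tree
        label, children = tree
        new_label = f"{label}^{parent}" if parent else label
        return (new_label, [parent_annotate(c, label) for c in children])

    tree = ("S", [("NP", ["he"]),
                  ("VP", [("VBD", ["was"]), ("ADJP", ["right"])])])
    print(parent_annotate(tree))
    # ('S', [('NP^S', ['he']),
    #        ('VP^S', [('VBD^VP', ['was']), ('ADJP^VP', ['right'])])])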

Learning Latent Annotations
  EM algorithm:
    Brackets are known
    Base categories are known
    Only induce subcategories
  Label all nodes with latent variables; same number k of subcategories for all categories
  [example: latent nodes X1 … X7 over "He was right"]
  Just like Forward-Backward for HMMs [Matsuzaki et al. '05, Prescher '05]

Automatic Annotation Induction
  Advantages:
    Automatically learned
  Disadvantages:
    Grammar gets too large
    Most categories are oversplit while others are undersplit

  Model                  F1
  Klein & Manning '03    86.3
  Matsuzaki et al. '05   86.7

Hierarchical Split / Merge
  [Petrov et al. '06: alternately split each symbol and merge back unhelpful splits]

Number of Phrasal Subcategories
  [bar chart: learned subcategory counts per phrasal category, from roughly 40 for NP and 30 for VP and PP down to one for the rarest categories]

  Model                  F1
  Matsuzaki et al. '05   86.7
  Petrov et al. '06      90.2

Linguistic Candy
  Proper nouns (NNP):
    NNP-14: Oct. Nov. Sept.
    NNP-12: John Robert James
    NNP-2:  J. E. L.
    NNP-1:  Bush Noriega Peters
    NNP-15: New San Wall
    NNP-3:  York Francisco Street
  Personal pronouns (PRP):
    PRP-0: It He I
    PRP-1: it he they
    PRP-2: it them him

Linguistic Candy (continued)
  Relative adverbs (RBR):
    RBR-0: further lower higher
    RBR-1: more less More
    RBR-2: earlier Earlier later
  Cardinal numbers (CD):
    CD-7:  one two Three
    CD-4:  1989 1990 1988
    CD-11: million billion trillion
    CD-0:  1 50 100
    CD-3:  1 30 31
    CD-9:  78 58 34

Dependency Parsing
  Lexicalized parsers can be seen as producing dependency trees
    [example: "questioned" heading "lawyer" and "witness", each heading "the"]
  Each local binary tree corresponds to an attachment in the dependency graph:
    X[h] → Y[h] Z[h'], combining spans <i, k> (head h) and <k, j> (head h')
  Pure dependency parsing is only cubic [Eisner 99]
  Some work on non-projective dependencies
    Common in, e.g., Czech parsing
    Can do with MST algorithms [McDonald and Pereira, 05]

Parse Reranking
  Assume the number of parses is very small
  We can represent each parse T as an arbitrary feature vector ϕ(T)
    Typically, all local rules are features
    Also non-local features, like how right-branching the overall tree is
    [Charniak and Johnson 05] gives a rich set of features
  Can use most any ML technique
  Current best parsers use reranking

Tree Insertion Grammars
  Rewrite large (possibly lexicalized) subtrees in a single step [Bod 95]
  Derivational ambiguity: whether subtrees were generated atomically or compositionally
  Most probable parse is NP-complete
  Common problem: ML estimates put all mass on large rules, and simple priors don't adequately fix the problem

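The reranking setup is easy to state in code: given the candidate list, a feature function ϕ, and learned weights w, return the candidate with the highest linear score. This sketch is my own; the feature function and weights are stand-ins for whatever the base parser and learner provide.

    def rerank(candidates, featurize, weights):
        # candidates: parse trees from a base parser's k-best list
        # featurize:  maps a tree T to a dict of feature counts, phi(T)
        # weights:    dict of learned feature weights, w
        def score(tree):
            return sum(weights.get(f, 0.0) * v
                       for f, v in featurize(tree).items())
        return max(candidates, key=score)
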
Non-CF Phenomena
  [examples of syntax beyond context-free power]

Semantic Role Labeling (SRL)
  Want to know more than which NP is a verb's subject
  Typical pipeline:
    Parse, then label roles
    Almost all errors are already in the parsing
    Really, SRL is quite a lot easier than parsing

SRL Example
  [figure: example sentence with labeled role spans]

Propbank / FrameNet
  FrameNet: roles shared between verbs
  PropBank: each verb has its own roles
  PropBank is more used, because it's layered over the treebank (and so has greater coverage, plus parses)
  Note: some linguistic theories postulate even fewer roles than FrameNet (e.g. 5-20 total: agent, patient, instrument, etc.)

Outline
  Language Modeling
  Syntactic / Semantic Parsing
  Machine Translation
  Information Extraction
  Unsupervised Learning

Machine Translation: Examples
  [figure: sample MT outputs]

Levels of Transfer
  The Vauquois triangle:
    Interlingua at the apex, reached by semantic composition and unpacked by semantic decomposition
    Semantic Structure → semantic transfer → Semantic Structure, via semantic analysis and semantic generation
    Syntactic Structure → syntactic transfer → Syntactic Structure, via syntactic analysis and syntactic generation
    Word Structure → direct transfer → Word Structure, via morphological analysis and morphological generation
    Source Text at the base on one side, Target Text on the other

General Approaches
  Rule-based approaches
    Expert system-like rewrite systems
    Deep transfer methods (analyze and generate)
    Lexicons come from humans
    Can be very fast, and can accumulate a lot of knowledge over time (e.g. Systran)
  Statistical approaches
    Word-to-word translation
    Phrase-based translation
    Syntax-based translation (tree-to-tree, tree-to-string)
    Trained on parallel corpora
    Usually noisy-channel (at least in spirit)

The Coding View
  "One naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say: 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'"
    Warren Weaver (1955:18, quoting a letter he wrote in 1947)

MT System Components
  Source: the language model P(e)
  Channel: the translation model P(f|e)
  Decoder: from the observed f, recover the best e:
    argmax_e P(e|f) = argmax_e P(f|e) P(e)
  Finds an English translation which is both fluent and semantically faithful to the foreign source

Pipeline of an MT System
  Data processing
  Sentence alignment
  Word alignment
  Transfer rule extraction
  Decoding

Word Alignment
  [example alignment grid between:
    English: What is the anticipated cost of collecting fees under the new proposal?
    French: En vertu des nouvelles propositions, quel est le coût prévu de perception des droits?]

Unsupervised Word Alignment
  Input: a bitext, i.e. pairs of translated sentences
    nous acceptons votre opinion .
    we accept your view .
  Output: alignments, i.e. pairs of translated words
  When words have unique sources, can represent as a (forward) alignment function a from French positions to English positions

IBM Model 1 [Brown et al., 93]
  Alignments: a hidden vector called an alignment specifies which English source word is responsible for each French target word
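A compact EM sketch for IBM Model 1 (the standard algorithm from Brown et al. 93, with a NULL source word for unaligned French words; the toy bitext and variable names are mine). Here t[(f, e)] estimates P(f | e):

    from collections import defaultdict

    def model1(bitext, iterations=20):
        t = defaultdict(lambda: 1.0)          # start from a flat table
        for _ in range(iterations):
            count = defaultdict(float)        # expected counts c(f, e)
            total = defaultdict(float)        # expected counts c(e)
            for f_sent, e_sent in bitext:     # E-step
                e_sent = ["NULL"] + e_sent    # NULL absorbs unaligned words
                for f in f_sent:
                    z = sum(t[f, e] for e in e_sent)
                    for e in e_sent:
                        p = t[f, e] / z       # posterior that e generated f
                        count[f, e] += p
                        total[e] += p
            t = defaultdict(float,            # M-step: renormalize
                            {(f, e): c / total[e]
                             for (f, e), c in count.items()})
        return t

    bitext = [("nous acceptons".split(), "we accept".split()),
              ("nous voyons".split(), "we see".split())]
    t = model1(bitext)
    print(t["nous", "we"], t["acceptons", "we"])   # "nous" should win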

Examples: Translation and Fertility
  Fertility example
  [examples shown as figures]

Example Errors
  [sample system errors shown as figures]

Decoding
  In these word-to-word models:
    Finding the best alignments is easy
    Finding translations (decoding) is hard

IBM Decoding as a TSP
  [Germann et al., 01]

Phrase Movement
  On Tuesday Nov. 4, earthquakes rocked Japan once again
  Des tremblements de terre ont à nouveau touché le Japon jeudi 4 novembre.