Commit 0b5aa63 by Noah Goodman: add m implicature exercise (1 file changed, +107 lines)

---
layout: exercise
title: Manner implicature and friends
---

There is regular singing and there is... not-so-good singing. There are two ways to say the same thing: "Miss X sang the anthem" and "Miss X produced a series of sounds that corresponded closely with the score of the anthem." Which utterance refers to which singing event? As another example, "He killed the sheriff" conveys murder, while "He caused the sheriff to die" conveys some less direct causal process. What about "pink" compared to "light red"?

These examples illustrate a general phenomenon: a *marked* (long, unusual, awkward) utterance is interpreted as conveying a *marked* (unusual, etc.) situation. Grice described these instances as arising from violation of his maxim of Manner, hence "manner implicature" or M-implicature. (See Rett, 2020 for a nice exposition.)

How does this play out in formal pragmatics models? We would hope, since the RSA utility formalizes Grice's maxims, that these inferences would arise naturally. Suppose we have two utterances that mean the same thing, but one is less costly. Suppose also that we have two objects that could be referred to, but one is more likely (or more salient). Does the cheaper phrase go with the more likely target? We implement this in standard RSA:

~~~
// two objects; the plain one is a priori the more likely referent
var objectPrior = function() {
  return categorical({vs: ["plain thing", "marked thing"], ps: [0.7, 0.3]})
}

// two words, one is longer
var utterances = ["thing", "tthhiing"]

// utterance cost function
var cost = function(utterance) {
  return utterance.length
}

// "thing" and "tthhiing" both apply to both objects
var meaning = function(utterance, obj){
  return true
}

// literal listener
var literalListener = function(utterance){
  return Infer(function(){
    var obj = objectPrior()
    condition(meaning(utterance, obj))
    return obj
  })
}

// set speaker optimality
var alpha = 1

// pragmatic speaker
var speaker = function(obj){
  return Infer(function(){
    var utterance = uniformDraw(utterances)
    factor(alpha * (literalListener(utterance).score(obj) - cost(utterance)))
    return utterance
  })
}

// pragmatic listener
var pragmaticListener = function(utterance){
  return Infer(function(){
    var obj = objectPrior()
    observe(speaker(obj), utterance)
    return obj
  })
}

viz.table(pragmaticListener("thing"))
viz.table(pragmaticListener("tthhiing"))
~~~

Standard RSA predicts that object interpretations match the prior probabilities for both words. How can we break the symmetry in interpretation without building it into the meanings *a priori*? One method, *lexical uncertainty* (Bergen, Levy, Goodman), posits that each word might have a more specific meaning -- applying to only one object -- but the listener doesn't know which. When a pragmatic listener isn't sure how a speaker uses words, they have to infer this jointly with the intended object. Implement this lexical uncertainty idea:

~~~
~~~
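
One possible scaffold, reusing `objectPrior`, `utterances`, `cost`, and `alpha` from the model above (a sketch, not the only analysis: it assumes only the short word's meaning is uncertain, and the refinement space below is a modeling choice):

~~~
// candidate refined meanings for "thing": it may apply to both
// objects or to just one; "tthhiing" always applies to both
var lexiconPrior = function() {
  return uniformDraw([
    ["plain thing", "marked thing"],
    ["plain thing"],
    ["marked thing"]
  ])
}

// meaning is now relative to a lexicon (the refinement of "thing")
var meaning = function(utterance, obj, lexicon){
  return utterance == "thing" ? _.includes(lexicon, obj) : true
}

// literal listener and speaker both treat the lexicon as fixed
var literalListener = function(utterance, lexicon){
  return Infer(function(){
    var obj = objectPrior()
    condition(meaning(utterance, obj, lexicon))
    return obj
  })
}

var speaker = function(obj, lexicon){
  return Infer(function(){
    var utterance = uniformDraw(utterances)
    factor(alpha * (literalListener(utterance, lexicon).score(obj) - cost(utterance)))
    return utterance
  })
}

// the pragmatic listener infers the lexicon jointly with the object
var pragmaticListener = function(utterance){
  return Infer(function(){
    var lexicon = lexiconPrior()
    var obj = objectPrior()
    observe(speaker(obj, lexicon), utterance)
    return obj
  })
}
~~~

Averaging over lexica at the pragmatic level is what can break the symmetry; check whether "tthhiing" now shifts interpretation toward the less likely object.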
Does the M-implicature now arise? How much does this depend on the possible meanings your listener entertains? Does the M-implicature still arise if the literal listener infers the meanings (instead of the pragmatic listener)?
More generally, we can introduce free variables into the meaning function that are to be filled in based on context.
Lifting these variables from the literal listener to the pragmatic listener yields a variety of interesting effects.
### Direct or indirect causation

When you hear "John caused the vase to break" you probably imagine an atypical or more complex situation than when you hear "John broke the vase". This could indicate that the lexical semantics of "break" is subtly different from that of "cause to break". An alternative hypothesis is that the meanings are the same, but an M-implicature arises due to the utterances' different lengths.

Let's formalize this with a world model in which there is an immediate causal chain leading from John to the broken vase, and also a chain with an intermediate event. Either John bumped the vase, which fell over and broke; or John bumped a lamp, which bumped into the vase, which fell over and broke. Because it involves a longer causal path, the latter will be less likely.

~~~
var JBV = flip(0.2)                        // John bumped the vase
var JBL = flip(0.2)                        // John bumped the lamp
var LBV = JBL ? flip(0.8) : false          // the lamp bumped the vase
var VB = (JBV || LBV) ? flip(0.8) : false  // the vase broke
~~~

For simplicity, assume that both utterances could refer to either *John bumped the vase and the vase broke* (`JBV && VB`) or *John bumped the lamp and the vase broke* (`JBL && VB`). (Note that this is probably not exactly what *cause* means. Lewis, Gerstenberg, and many others suggest that the meaning involves a counterfactual: the vase broke, but if John hadn't been there it wouldn't have.) Implement these meanings in an RSA model, add lexical uncertainty about whether `JBV` or `JBL` is the intended meaning of "John ...", and verify that the pragmatic listener draws the correct interpretations.

~~~
~~~
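
Here is one possible starting point (a sketch: the utterance wordings, the cost scale, and the trivially true "silence" alternative — included so the speaker always has some usable utterance — are all assumptions):

~~~
// world prior, repackaged from the causal model above
var worldPrior = function() {
  var JBV = flip(0.2)
  var JBL = flip(0.2)
  var LBV = JBL ? flip(0.8) : false
  var VB = (JBV || LBV) ? flip(0.8) : false
  return {JBV: JBV, JBL: JBL, VB: VB}
}

var utterances = ["John broke the vase", "John caused the vase to break", "silence"]
var cost = function(utterance) { return utterance.length / 10 }

// lexical uncertainty: which chain each utterance picks out
var lexiconPrior = function() {
  return {
    "John broke the vase": uniformDraw(["JBV", "JBL"]),
    "John caused the vase to break": uniformDraw(["JBV", "JBL"])
  }
}

// both utterances mean "the relevant chain happened and the vase broke";
// "silence" is trivially true
var meaning = function(utterance, world, lexicon){
  return utterance == "silence" ? true : world[lexicon[utterance]] && world.VB
}

var literalListener = function(utterance, lexicon){
  return Infer(function(){
    var world = worldPrior()
    condition(meaning(utterance, world, lexicon))
    return world
  })
}

var alpha = 1

var speaker = function(world, lexicon){
  return Infer(function(){
    var utterance = uniformDraw(utterances)
    factor(alpha * (literalListener(utterance, lexicon).score(world) - cost(utterance)))
    return utterance
  })
}

// the pragmatic listener infers the lexicon jointly with the world
var pragmaticListener = function(utterance){
  return Infer(function(){
    var lexicon = lexiconPrior()
    var world = worldPrior()
    observe(speaker(world, lexicon), utterance)
    return world
  })
}

viz.table(pragmaticListener("John broke the vase"))
viz.table(pragmaticListener("John caused the vase to break"))
~~~

Check whether the longer utterance shifts probability toward the indirect `JBL` chain.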
### Other M-implicatures
Notice that in the above causation example the lexical uncertainty arose out of ambiguity: it was ambiguous whether "John" in the sentence "John broke the vase" referred to the event of John bumping the vase (`JBV`) or John bumping the lamp (`JBL`). By resolving this ambiguity at the pragmatic listener level we introduced the opportunity for M-implicature.
This analysis suggests that any ambiguity in meaning could in principle give rise to M-implicature. Come up with several sources of ambiguity in language and see whether you think they can lead to M-implicature!
## Vagueness
...TBD