Commit 9f80ffe

Noah Goodman authored and committed
add meaning exercise
1 parent 587fa78 commit 9f80ffe

File tree

1 file changed
teaching_extras/RSA-meaning.md

Lines changed: 133 additions & 0 deletions
---
layout: exercise
title: On meaning in RSA
---

At its core, the RSA framework describes language use in a way that is fairly agnostic to the details of the language semantics. Indeed, RSA only assumes that there is a `meaning` function describing the relationship between utterances and situations (or worlds).

Traditional compositional semantics builds up a meaning function by composing partial meanings along the way. Using composition allows fairly complex meaning functions to be specified compactly. For instance, if we want to allow utterances in which some number of adjectives modify a noun, we don't want to specify the meaning separately for every combination.
For simplicity, assume that these are *intersective* adjectives that combine by conjunction.
Look through the meaning function below. How are the meanings of individual words composed? Try adding some additional adjectives!

~~~
// set of states (here: objects of reference)
// we represent objects as JavaScript objects to demarcate them from utterances.
// we will assume that if an object doesn't have a key then this property is false.
// we give each object a string name to make it easier to manipulate them.
var objects = [{blue: true, square: true, thing: true, name: "blue square"},
               {blue: true, circle: true, thing: true, name: "blue circle"},
               {green: true, square: true, thing: true, name: "green square"}]

// prior over world states
var objectPrior = function() {
  var obj = uniformDraw(objects)
  return obj
}

// set of utterances
var utterances = ["square", "circle", "thing",
                  "blue thing", "green thing",
                  "blue square", "green square",
                  "blue circle"]

// utterance cost function
var cost = function(utterance) {
  return 0;
};

// meaning function to interpret the utterances:
var hasProperty = function(obj, word){return _.has(obj, word) ? obj[word] : false}

var meaning = function(utterance, obj){
  var utterance = _.isArray(utterance) ? utterance : utterance.split(" ")
  var firstWordMeaning = hasProperty(obj, utterance[0])
  return utterance.length==1 ? firstWordMeaning : firstWordMeaning && meaning(_.drop(utterance),obj)
}

// literal listener
var literalListener = function(utterance){
  Infer(function(){
    var obj = objectPrior();
    condition(meaning(utterance, obj))
    return obj
  })
}

// set speaker optimality
var alpha = 1

// pragmatic speaker
var speaker = function(obj){
  Infer(function(){
    var utterance = uniformDraw(utterances)
    factor(alpha * (literalListener(utterance).score(obj) - cost(utterance)))
    return utterance
  })
}

// pragmatic listener
var pragmaticListener = function(utterance){
  Infer(function(){
    var obj = objectPrior()
    observe(speaker(obj),utterance)
    return obj
  })
}

viz.table(speaker(objects[0]))
viz.table(pragmaticListener("blue thing"))
~~~

## Better dogs

Psychological studies of categorization have suggested that category membership can be graded -- that is, people do not always say either "yes" or "no" when asked whether a given object belongs to a category.
Another way of saying this is that some objects are *better* members of the category than others.
How can we incorporate into our semantics the idea that some dogs are better, or more typical, dogs than others?
What does the pragmatics do with this flexibility? Revise the above to have a graded degree for each property (rather than true or false), and revise the meaning function so that it uses this degree to randomly decide whether the word is true or false each time. (A single-word utterance should be true with probability equal to the graded degree associated with that property for the object in question.)

~~~
~~~
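
If you want a hint to get started, here is one possible sketch -- just one way to set it up, and the degree values are invented for illustration. Each property gets a graded degree, and `meaning` flips a coin weighted by that degree for each word; the utterances, cost function, speaker, and pragmatic listener from the model above can then be copied in unchanged.

~~~
// one possible sketch: graded degrees instead of Boolean properties
// (the particular degree values here are invented for illustration)
var objects = [{blue: 0.9, square: 0.9, thing: 1, name: "blue square"},
               {blue: 0.9, circle: 0.7, thing: 1, name: "blue circle"},
               {green: 0.8, square: 0.5, thing: 1, name: "green square"}]

var objectPrior = function() {
  return uniformDraw(objects)
}

// a missing key is treated as degree 0
var degree = function(obj, word){return _.has(obj, word) ? obj[word] : 0}

// a single word is true with probability equal to the object's degree for
// that property; multi-word utterances still combine by conjunction
var meaning = function(utterance, obj){
  var utterance = _.isArray(utterance) ? utterance : utterance.split(" ")
  var firstWordMeaning = flip(degree(obj, utterance[0]))
  return utterance.length==1 ? firstWordMeaning : firstWordMeaning && meaning(_.drop(utterance),obj)
}

// literal listener, unchanged except that meaning is now stochastic
var literalListener = function(utterance){
  Infer(function(){
    var obj = objectPrior();
    condition(meaning(utterance, obj))
    return obj
  })
}

viz.table(literalListener("square"))
~~~
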

How will the choice of referring expression by the speaker depend on how good the target is as a member of each category? Play around with varying the typicality of the different objects for the different properties!

### Dog dogs

If you are at the dog park and your friend refers to the "*dog* dog", you look for the very most typical dog. Under a deterministic semantics, duplicating a noun will have no effect. (Why?) What happens in the stochastic semantics when a noun is duplicated? Set up a situation in which one object is a better square than another, while a third is not a square at all. How do the literal and pragmatic listeners interpret "square" vs. "square square"?

~~~
~~~
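
As a rough sketch of the kind of scene the question asks for (again with invented degree values): one object is a very good square, one is a marginal square, and one is not a square at all. The literal listener below can be compared on "square" vs. "square square"; adding the speaker and pragmatic listener is left as part of the exercise. Because each occurrence of a word is a fresh weighted coin flip, "square square" is true of an object with probability equal to its degree squared, which pulls interpretation toward the most typical square.

~~~
// hypothetical scene: a very good square, a marginal square, and a non-square
var objects = [{square: 0.95, thing: 1, name: "good square"},
               {square: 0.5, thing: 1, name: "so-so square"},
               {circle: 0.9, thing: 1, name: "circle"}]

var objectPrior = function() {
  return uniformDraw(objects)
}

var degree = function(obj, word){return _.has(obj, word) ? obj[word] : 0}

// each word is a separate weighted coin flip, so a duplicated noun
// must come out true twice for the same object
var meaning = function(utterance, obj){
  var utterance = _.isArray(utterance) ? utterance : utterance.split(" ")
  var firstWordMeaning = flip(degree(obj, utterance[0]))
  return utterance.length==1 ? firstWordMeaning : firstWordMeaning && meaning(_.drop(utterance),obj)
}

var literalListener = function(utterance){
  Infer(function(){
    var obj = objectPrior();
    condition(meaning(utterance, obj))
    return obj
  })
}

viz.table(literalListener("square"))
viz.table(literalListener("square square"))
~~~
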

### Expected meaning

Consider the following change to the literal listener:

~~~
// literal listener
var literalListener = function(utterance){
  Infer(function(){
    var obj = objectPrior();
    // condition(meaning(utterance, obj))
    var expectedMeaning = expectation(Infer(function(){meaning(utterance, obj)}))
    factor(Math.log(expectedMeaning))
    return obj
  })
}
~~~

We have replaced the (potentially stochastic) meaning function with the expected value of the meaning function. Convince yourself that this is an equivalent formulation. You might want to write down the probabilities involved explicitly; alternatively, you might want to reason about what a rejection sampler would do.

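One way to write it down, as a sketch: under the original `condition` version, a rejection sampler draws an object along with any stochastic choices inside the meaning function, and keeps the object only when the sampled meaning comes out true, so

$$L_0(obj \mid u) \propto P(obj) \cdot P(\text{meaning}(u, obj) = \text{true}) = P(obj) \cdot \mathbb{E}[\text{meaning}(u, obj)]$$

The `factor(Math.log(expectedMeaning))` version multiplies the unnormalized score of each object by exactly this expectation, so the two listeners define the same distribution.
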

Once we have switched to expected meaning, it is reasonable to ask when we can push the expectation further into the meaning function. Can we replace the stochastic meaning of each word with a (deterministic, but real-valued) expected meaning? If we do so, are composition laws preserved? Sometimes this is certainly the case: modify the conjunctive semantics above to use the expected truth value of each word. What operation replaces conjunction?

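For the conjunctive case, one way to check your answer is a quick numerical sketch: with independent word-level coin flips, the expected truth value of a two-word utterance equals the product of the two degrees (the degree values below are invented for illustration).

~~~
// a quick check that, for independent word-level flips, expected meaning
// composes by multiplication (degrees are invented for illustration)
var blueDegree = 0.8
var squareDegree = 0.5

// expected truth value of the conjunction of two independent flips
var expectedConjunction = expectation(Infer(function(){
  return flip(blueDegree) && flip(squareDegree)
}))

display(expectedConjunction)          // E[blue && square]
display(blueDegree * squareDegree)    // product of the expected word meanings
~~~
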
Assuming you have a meaning function that computes a stochastic Boolean result by composing together stochastic functions, can the expected meaning always be computed by first computing the expected meaning of the words and then composing in the same way? Why or why not?

## Just "dog"

A complete sentence, one that is intuitively true or false, requires a verb: for instance, "There is a vicious dog on my lawn."
But in the right context we can convey an equivalent meaning with a fragmentary utterance, for instance just "dog". What do we need from semantics for communication with fragments to work?

Two possible solutions: a noisy channel, or "early readout" à la neural networks.
