This is our wonderfully ambitious schedule.
We'll attempt to keep to it, but it is subject to modification.

Date | Topic | Readings | Notices & Reading Questions
(* = to be read for that class; all others are reference readings)
9/24/15 Admin,
Selection of articles,
and Intro to Lang Acq
[Lisa]

(1) Jackendoff 1994: 3-34
[Ch 1, 2, 3]
(2) O'Grady 2005: 164-175
(3) Goodluck 2010: nativist perspective on language acquisition
(4) O'Grady 2012: review of alternative hypotheses to UG
(5) Yang 2010 Ms, Yang 2011: Zipfian distributions in language
(6) Kuhl 2010 TED talk: first 10 minutes

Introductory message board discussion points due
9/29/15 Mechanism & Methods
[Stephen]

* (1) Marr 1982: Ch. 1 (only pp. 24-29 [pp. 12-14 of the PDF])
* (2) Pearl & Goldwater (forthcoming)

Marr's Levels
(A1) Bechtel & Shagrir 2015: contributions of the three levels
(A2) Griffiths et al. 2015: between computational and algorithmic
(A3) Cooper & Peebles 2015: integrated cognitive architectures that use the three levels
(A4) Love 2015: algorithmic level
(A5) French & Thomas 2015: emergent structures and Marr’s levels

Mechanisms
(B1) Romberg & Saffran 2010: overview of infant statistical learning abilities
(B2) Aslin & Newport 2012: overview of infant & adult statistical learning abilities
(B3) Denison et al. 2013: probabilistic reasoning in infants
(B4) Roseberry et al. 2011: domain-general statistical learning in infants
(B5) Davis et al. 2011: probability matching in 10-month-olds
(B6) Gweon et al. 2010: infant sensitivity to the sampling environment
(B6) Dewar & Xu 2010: infant formation of overhypotheses
(B7) Lany & Gomez 2012: probabilistic learning (when perfect cues are not as helpful)

Experimental Methods
(C1) Kidd et al. 2010, Kidd et al. 2012: infant looking-time preferences -- what controls them
(C2) Ambridge & Rowland 2013: experimental methods in language acquisition

Computational Methods
(D1) Clark & Sakas 2011: short overview of the utility of comp modeling
(D2) Pearl 2010: how & when to use comp modeling
(D3) Frank 2012 Ms.: overview of comp modeling in language acquisition
(D4) Bonawitz et al. 2011: simple sequential algorithm for approximating Bayesian inference
(D5) Perfors et al. 2011: Bayesian tutorial for cognitive development
(D6) Gopnik & Tenenbaum 2007: overview of Bayesian inference & cognitive development + Gopnik & Schultz 2007: humorous exchange about Bayes nets and cog dev
(D7) Kemp, Perfors, & Tenenbaum 2007: hierarchical Bayesian overview
(D8) Perfors 2012: Thoughts on how to use Bayesian modeling
(D9) Orbanz & Teh 2010: reference for details of non-parametric Bayesian models
(D10) Tenenbaum et al. 2011: Bayesian inference for cognition
(D11) Griffiths et al. 2012: Neural nets vs Bayes
(D12) Abbott et al. 2012: Bayesian inference approximation
(D13) Jones & Love 2011: Bayesian fundamentalism or enlightenment + Marcus & Davis 2013 criticism
(D14) Frank 2013: utility of Bayesian models

Message board discussion points due
10/1/15 Speech Perception
[Georgina]

* (1) Feldman et al. 2013

Background
(A1) Werker 1995: background on phoneme perception
(A2) Swingley 2009: general overview, plus a section on the sounds of words
(A3) The Linguistic Genius of Babies (up through 10:15 is a good general overview)
(A4) Casserly & Pisoni 2010: general overview of speech perception & production

Experimental
(B1) Dietrich, Swingley, & Werker 2007: 18-month-olds' discrimination of sounds in word contexts
(B2) Yoshida et al. 2010: 10-month-old infant phoneme discrimination abilities
(B3) Maye et al. 2002: infant sensitivity to bimodal distributions
(B4) Maye et al. 2008: update on infant distributional abilities (bimodal facilitation for phonetic learning)
(B5) Monahan & Idsardi 2010: biological plausibility of extracting phonetically relevant info from acoustic data
(B6) Feldman et al. 2011, 2013: human learner sensitivity to word context of a sound
(B7) Thiessen 2011: expt evidence that infants use word context when distinguishing sounds
(B8) Thiessen & Pavlik 2013: why minimal pairs really aren't helpful to children (holistic representation helps distinguish prefixes)

Computational
(C1) Feldman, Griffiths, & Morgan 2009: shorter version of Feldman et al. 2013
(C2) Vallabha et al. 2007: identifying vowels from acoustic data
(C3) Elsner et al. 2012: learning phon categories and words from child-directed speech
(C4) Adriaans & Swingley 2012: useful cues to phonetic categories
(C5) Martin et al. 2013: learning phonemes with proto-lexicons (simultaneous problem solving)
(C6) Dillon et al. 2013: joint learning of phonetic categories and phonemes

Message board discussion points due
10/6/15 Speech Segmentation I [Bailey]

* (1) Blanchard et al. 2010: cog plausible inference with phonotactic constraints

Background
(A1) Sonderegger 2008 Ms: overview of infant word segmentation behavior & strategies

Experimental
(B1) Saffran, Aslin, & Newport 1996: infant probability tracking
(B2) Gomez & Gerken 2000: artificial language expts
(B3) Finn & Hudson Kam 2008: issues with adults in artificial language expts
(B4) Onnis et al. 2005: issues with adults in artificial language expts
(B5) Johnson & Tyler 2010: issues with infant sensitivity to trans prob
(B6) Lew-Williams et al. 2011: utility of isolated words for word seg
(B7) Mersad & Nazzi 2012: utility of familiar words in word seg
(B8) Willits et al. 2009: morpheme tracking
(B9) Kurumada et al. 2013: Zipfian distribution, using context, implementing chunking

Computational
(C1) Gambell & Yang 2006 Ms, Lignos 2011, Lignos 2012: algebraic learning + stress + probabilistic memory
(C2) Swingley 2005: using mutual information over syllables
(C3) Jarosz & Johnson 2013: comp analysis of distributional cues utility (useful when combined, but not separately)
(C4) Ketrez 2014: vowel harmony as statistical word seg cue
(C5) Daland & Pierrehumbert 2011: model based on diphones

Message board discussion points due
10/8/15 Speech Segmentation II
[Stephen]

* (1) Phillips & Pearl 2015

Experimental
(A1) Frank et al. 2010: Bayesian model matching human word seg performance

Computational
(B1) Phillips & Pearl 2012: constrained Bayesian word seg over syllables (shorter version of Phillips & Pearl 2015)
(B2) Goldwater et al. 2009: ideal learner Bayesian model
(B3) Johnson & Goldwater 2009: ideal learner Bayesian model
(B4) Pearl et al. 2010, 2011: more cognitively plausible algorithms
(B5) McInnes & Goldwater 2011: using acoustic input
(B6) Boerschinger & Johnson 2011: particle filter for Bayesian seg
(B7) Phillips & Pearl 2014a, 2014b, 2015 Ms.: cross-linguistic Bayesian segmentation
(B8) Phillips & Pearl 2015: utility of segmentation output
(B9) Boerschinger et al. 2012: input size effects
(B10) Doyle & Levy 2013: learning stress patterns and segmenting at the same time (Bayesian)
(B11) Boerschinger & Johnson 2014: inferring stress constraints

Message board discussion points due
10/13/15 Word Meaning:
Non-Overlapping Concepts
[Colin]

* (1) Frank, Goodman, & Tenenbaum 2009

Background
(A1) Swingley 2012: intro to word meaning learning, from the cog dev perspective

Experimental
(B1) Bergelson & Swingley 2012, 2014, 2015: early word learning
(B2) Graf Estes et al. 2011: constraints on word labels
(B3) Smith & Yu 2008: infant cross-situational learning
(B4) Medina et al. 2011: against cross-situational learning
(B5) Ramscar et al. 2011: for cross-situational learning, but with differences between kids and adults
(B6) Kachergis et al. 2012: active vs passive learning for word-meaning mapping
(B7) Yurovsky et al. 2012: word seg + word-meaning mapping in parallel
(B8) Smith & Yu 2013: visual attention & local effects in cross-situational learning
(B9) Yurovsky et al. 2013: utility of partial knowledge
(B10) Kachergis & Yu 2013: cross sit learning without 1-1 mapping
(B11) Romberg & Yu 2013: rich info structure in cross-sit learning
(B12) Romberg & Yu 2014: cross-situational learning vs. hypothesis-testing

Computational
(C1) Frank et al. 2012: using social cues for word learning
(C2) Fazly et al. 2010: probabilistic model of word-meaning mapping for more than just nouns
(C3) Stevens et al. 2013: pursuit of word meanings (word-meaning mapping) + word-learning commentary
(C4) Nematzadeh 2010: multi-word acq model
(C5) Nematzadeh et al. 2011: word learning + sem cat in late talkers
(C6) Nematzadeh et al. 2012: memory, attention, & word learning
(C7) Lewis & Frank 2013: Bayesian model of concept learning and word-concept mapping (follow-up for Frank et al. 2009)
(C8) Carstensen et al. 2014: rational model of learning words for spatial relationships (extension of Frank et al. 2009)
(C9) Mollica & Piantadosi 2015: cross-situational word learning with recursion

Message board discussion points due
10/15/15 Word Meaning:
Overlapping Concepts
[Prutha]

* (1) Xu & Tenenbaum 2007

Computational
(A1) Gagliardi et al. 2012: incorporating grammatical category information
(A2) Jenkins et al. 2015: non-Bayes word learning for overlapping concepts (experimental + computational)
(A3) Meylan & Griffiths 2015: learning words from multiword utterances - Xu & Tenenbaum 2007 extension

Message board discussion points due
10/20/15 Grammatical Categories
[Galia]

* (1) Mintz 2003

Experimental
(A1) Mintz 2006: infant sensitivity to frequent frames (FFs)
(A2) Syrett & Lidz 2010, Syrett et al. 2014: identifying more complex grammatical category information
(A3) Lany & Saffran 2011: other cues to grammatical category that children use

Computational
(B1) Wang & Mintz 2010: why FFs work
(B2) Chemla et al. 2009: the importance of frames vs. trigrams, FFs in other languages
(B3) Weisleder & Waxman 2010: Spanish FFs
(B4) St. Clair et al. 2010: Flexible frames (Mintz 2003 follow-up)
(B5) Stumper et al. 2011: German FFs
(B6) Liebbrandt & Powers 2010: Issues with Dutch FFs
(B7) Wang & Mintz 2008: online learning of FFs
(B8) Freudenthal et al. 2013: FF comparison with other framing metrics
(B9) Goldwater & Griffiths 2007: Bayesian categorization (ideal learner)

Message board discussion points due
10/22/15 Morphology [Blair]

* (1) Gagliardi et al. 2012

Experimental
(A1) Gagliardi & Lidz 2014: noun classification data
(A2) Demuth & Weschler 2012: noun classification in Sesotho + acquisition

Message board discussion points due
10/27/15 Morphosyntax
[Alandi]

* (1) Yang 2010 Ms
* (2) Faculty of Language commentary

Background
(A1) Piantadosi 2014: Zipf's law
(A2) Zipf's law video

Computational/Corpus
(B1) Yang 2011: shorter version of Yang 2010 Ms
(B2) Kowalski & Yang 2012: child vs adult usage of verbs
(B3) Yang 2013: productivity in children vs great apes
(B4) Pine et al. 2013: productivity in determiners (response to Yang 2013)

Message board discussion points due
10/29/15 Poverty of the Stimulus I: Intro
[Lisa]

* (1) Gerken 2006
* (2) Gerken 2010

Background
(A1) Pinker 2004: overview of poverty of the stimulus
(A2) Yang posts on positive & negative evidence at the Faculty of Language blog: 1, 2, 3

Experimental
(B1) Mueller et al. 2012: development of rule-learning in infants with auditory perception roots
(B2) Gervain & Werker 2013: non-adjacent rule learning in 7-month-olds

Message board discussion points due
11/3/15 Poverty of the Stimulus II: Against
[Ryan]

* (1) Pullum & Scholz 2002

Background
(A1) Crain & Pietroski 2002: difficult linguistic knowledge
(A2) Pullum 2011: more recent comments on Chomsky's approach to UG + Brenchley & Lobina 2011: reply to Pullum's comments + video of Chomsky's comments in London
(A3) Yang 2015: The contribution of frequency in language acquisition

Experimental
(B1) Gamache & Schmitt 2012: compounds and issues with learning them
(B2) Ramscar et al. 2013: update on poverty of the stimulus with plural overregularization (expt & comp)

Computational
(C1) Legate & Yang 2002: quantifying the learnability of a particular phenomenon
(C2) Hsu & Chater 2010: another way to quantify the learnability of a particular phenomenon
(C3) Yang 2015 Ms: A-adjectives (negative, positive evidence)

Message board discussion points due
11/5/15 Poverty of the Stimulus + Syntax:
Structure Dependence
[Prutha]

* (1) Perfors, Tenenbaum, & Regier 2011

Background/Theory
(A1) Berwick et al. 2011: reply to Perfors, Tenenbaum, & Regier (among others)

Computational
(B1) Perfors, Tenenbaum, & Regier 2006: shorter version of Perfors et al. 2011
(B2) Reali & Christiansen 2005: statistical learning of y/n questions
(B3) Kam et al. 2008: problems with statistical learning of y/n questions

Message board discussion points due
11/10/15 Poverty of the Stimulus + Syntax:
Anaphoric One
[K.J.]

* (1) Pearl & Mis in press

Background
(A1) Payne et al. 2013: semantics-focused account of learning anaphoric one

Experimental
(B1) Lidz, Waxman, & Freedman 2003: 18-month-olds know anaphoric one

Computational
(C1) Pearl & Mis 2011: shorter version of Pearl & Mis in press
(C2) Regier & Gahl 2004: Bayesian learning of anaphoric one using ambiguous data
(C3) Foraker et al. 2009: Bayesian learning of anaphoric one using linguistic knowledge
(C4) Pearl & Lidz 2009: learners can only use some ambiguous data

Message board discussion points due
11/12/15 NO CLASS

Keep working on the reading for next time

11/17/15 Poverty of the Stimulus + Syntax:
Syntactic Islands
[Blair]

* (1) Pearl & Sprouse 2013

Background
(A1) Phillips 2013: response to Pearl & Sprouse 2013

Experimental
(B1) Gagliardi et al. 2012 Ms: acquisition of filler-gap dependencies by young children

Computational
(C1) Pearl & Sprouse 2013 book chapter: focused on relationship to language processing
(C2) Pearl & Sprouse 2015: focused on applications in language development



Message board discussion points due
11/19/15 Rules of Language Use
[K.J.]

* (1) Goodman & Stuhlmuller 2013

Background
(A1) McNally 2013: Semantics & pragmatics review article

Computational
(B1) Frank & Goodman 2012 + supplementary material: pragmatic reasoning
(B2) Bergen et al. 2014: RSA (Rational Speech Act) model + implicatures
(B3) Kao et al. 2014: nonliteral understanding of number words
(B4) Kao & Goodman 2015: verbal irony



Message board discussion points due
11/24/15 Complex Systems: Intro
[Galia]

* (1) Pearl & Lidz 2013

Background
(A1) Baker 2008: macro vs. micro parameters
(A2) Lasnik & Lohndal 2010: approaches to parameters
(A3) Lightfoot 2010: using cues to learn parameters
(A4) O'Grady 2005 pp.120-142: Some difficult syntactic phenomena
(A5) Lidz 2010: constituents, bare plurals, ditransitive verbs

Experimental
(B1) Nevins 2010: constraints on phonological grammars
(B2) Viau & Lidz 2011: ditransitive verbs
(B3) Becker & Estigarribia 2013: raising & control verbs

Computational
(C1) Mitchener & Becker 2011: learning about raising vs. control verbs
(C2) Orita et al. 2013: pronouns with discourse info (exp + comp)


Message board discussion points due
11/26/15 NO CLASS
Happy Thanksgiving!


12/1/15 Complex Systems, Statistical Learning, & UG
[Alandi]

* (1) Lidz & Gagliardi 2015

Background
(A1) Pearl 2014: how to use modeling to tell us about UG (response to Ambridge et al. 2014)

Experimental
(B1) Hadley et al. 2011: support for variational learning

Computational
(C1) Yang 2004, 2011: UG & statistical learning (variational learner)
(C2) Legate & Yang 2007: learning optional infinitives with variational learning
(C3) Freudenthal et al. 2010: against variational learning for optional infinitives
(C4) Freudenthal et al. 2015: MOSAIC model for OIs
(C5) Pelham 2011: ambiguity makes learning more difficult
(C6) Pearl 2008: unambiguous data in metrical phonology
(C7) Pearl 2009, 2011: learning metrical phonology parameters
(C8) Legate & Yang 2012: productivity & metrical phonology

Message board discussion points due
12/3/15 Peer review
[Everyone]

  • Upload your draft to the dropbox folder under SharedStudentFiles labeled "215L Writing Drafts" by 11:00am.

  • Bring your laptops to class so you can create text files with comments on other people's drafts. These will then be uploaded to the dropbox folder under SharedStudentFiles labeled "215L Peer Review".
    Assigned reviews are here on the message board.
    Comments due 12/4/15 @ 12:30pm.
12/8/15 Final presentations
Special time: 11:00am-12:30pm [Everyone]

  • Final presentation slides should be uploaded to the dropbox folder under SharedStudentFiles labeled "215L Final Slide" by 11:00am.

  • Final writing assignment should be uploaded to the dropbox folder under AssignmentSubmission labeled "215L Final Assignm." by 12:30pm.
  • Make sure to check the AssignmentReturn subfolder later on to see your score and relevant feedback for your assignment.