Finding out what music does to people – an interview with Makiko Sadakata

by Gisela Govaart, January 2015 

Makiko Sadakata is assistant professor of music cognition at the University of Amsterdam and Radboud University Nijmegen. She studied Music Composition and Musicology at the Kyoto City University of Arts, and obtained a Ph.D. in Social Science at Radboud University Nijmegen. Her research focuses on auditory perceptual learning, music and language, and neurofeedback.

Makiko Sadakata

“Before I got interested in music cognition, I studied music composition. Then I got more and more interested in how people listen to sounds. As a music creator I was really interested in what music does to people. For example, which music makes people happy? There was a course on music cognition at the university where I studied, taught by one of the leading researchers in the Japanese music cognition field, Kengo Ohgushi. The course was very inspiring, so I decided to study the topic further… and ended up switching my focus completely from music composition to music cognition. That was a big change! I needed to learn a lot of new things: statistics, experimental methods, psychology.”

“While I was doing my master’s in Japan, a researcher from Nijmegen, Peter Desain, visited my university for a sabbatical. He was working on categorical rhythm perception with Henkjan Honing, which I found very interesting. Back then, I was also keen on speaking English (which is not easy to do in Japan) and on getting to know a new culture. Combining all these “curiosities”, I strongly felt that I should talk to Peter Desain as much as I could. I think I kept asking him to join me for lunch! For my master’s thesis project, I proposed a cross-cultural study on rhythm production, and Peter kindly invited me to come to Nijmegen for the data collection. I remember being very excited about this project.

When I first visited Radboud University Nijmegen, I was quite amazed by the research environment. Since Radboud University does not have a music department, music research was carried out as a collaboration between researchers from different disciplines, e.g., psychology, mathematics, informatics, and musicology. I also met Henkjan Honing during this first visit. I was amazed that people from different fields communicate with each other so easily in the Netherlands; at least, that has been my experience in both Nijmegen and Amsterdam. I had a really good time in the Netherlands and immediately thought that I would like to do my PhD with Peter Desain and Henkjan Honing.”

“The keywords I have been interested in for a long time are learning and gaining fluency in the processing of auditory signals. It does not really matter whether this is in speech or in music. Of course, speech and music are different, but the level I am dealing with at the moment is that of abstract categories (phonemes or notes), and I see a marked similarity in how we process them. The way we form sound categories depends on where you grew up and on what kind of language or music background you have. A lot of people are looking for differences between language and music, but I am more interested in looking for overlapping mechanisms.”

“For my PhD research, I investigated the influence of culture on how people perceive, produce and compose musical rhythms. I was interested in looking at any dimension of music, but I was in a group with expertise in rhythm, so I decided to focus on the temporal aspect of music. The Japanese language is known to be quite unique with regard to its speech rhythm: each syllable (mora) has a similar duration. What we found in our study is that this characteristic is reflected in music: Japanese music tends to be “flatter” with regard to its rhythm, using notes of more similar durations. Japanese musicians also tend to perform rhythms in a flatter manner than Dutch musicians do. We found a correlation between the rhythm of the language and the performance/perception strategies for musical rhythms. Of course, we cannot say too much about causality based on correlational evidence.
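A durational variability index makes this notion of rhythmic “flatness” concrete. The interview does not name the measure used, so the sketch below, based on the nPVI (normalized Pairwise Variability Index), a common choice in cross-cultural rhythm research, is an illustration rather than the study’s actual analysis:

```python
# Hedged sketch: quantifying rhythmic "flatness" with the nPVI
# (normalized Pairwise Variability Index). The measure is assumed
# here for illustration; the interview does not name the one used.

def npvi(durations):
    """nPVI over a sequence of note durations; lower = flatter rhythm."""
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

flat = [1.0, 1.0, 1.1, 0.9, 1.0]     # similar durations ("flat" rhythm)
varied = [1.5, 0.5, 1.5, 0.5, 2.0]   # long-short alternation

print(npvi(flat))    # ~10: low durational contrast
print(npvi(varied))  # ~105: high durational contrast
```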

The same score rhythm can be heard or performed in different ways. Towards the end of my PhD project, we tried to address how rhythm perception and production are associated, and what role prior experience plays there. We applied a Bayesian formulation to the rhythm perception-production relationship; it is not yet advanced enough to explain complex variability, for example the cultural bias, but it has the potential to be extended in that direction.”
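The interview does not spell out the model, but in the most generic Bayesian reading of this idea, the rhythm category a listener perceives combines the sensory evidence from the performed timing with the listener’s prior experience:

```latex
% Generic Bayesian reading (a sketch; the interview does not give the model):
% c = rhythm category, t = performed timing pattern
\[
  P(c \mid t) \;=\; \frac{P(t \mid c)\, P(c)}{\sum_{c'} P(t \mid c')\, P(c')}
\]
% P(t | c): how category c is typically performed (the production side)
% P(c): prior over rhythm categories, shaped by e.g. cultural exposure
```

On this reading, a cultural bias would enter through the prior P(c) and through culture-specific production distributions P(t | c).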

“After my PhD, I looked more into various aspects of learning. The first project I was involved in was “PracticeSpace”, which aims at developing a visual feedback system for learning to play musical instruments. The system could be used to learn to play the piano and the drums, using different visualizations of sounds. When you learn to play an instrument, teachers can give different instructions, for example: ‘play as if you are floating’ or ‘play as if you are singing’. There are many such metaphors, but their interpretation can vary from person to person. The idea of our project was to capture the acoustic signals associated with a performance and project them in a visual display. In this way, people do not have to deal with “potentially vague metaphors”. But there is a big question of which visual features are best suited to represent acoustic signals. Learning is known to depend on the skill level of the learner: for beginners, presenting a lot of information may hinder learning, but an advanced player can incorporate more information. By testing advanced players, we found that people like to see note-by-note representations, such as the timing and loudness of each note, but in fact they improved most when presented with a higher-level interpretation of note relationships, such as “different feelings of drum grooves”.

Recently, we submitted a paper about applying this visualization technology to speech therapy for Parkinson’s patients. These patients have difficulty judging their own speech signal, such as its loudness and pitch height. Research showed that presenting their pitch and loudness visually was helpful in their speech therapy; however, the same study indicated that interpreting and integrating two graphs (one for pitch, one for loudness) is sometimes hard for them. In collaboration with the Sint Maartenskliniek in Nijmegen, we tried to find the best way to combine these two dimensions in a single visual feedback display for this patient population. I find this project very exciting because it gives me the feeling that researchers can give something back to society.”

“I am currently involved in the EarOpener project, which tries to provide neurofeedback based on categorical perception and to see whether this helps the learning of sound categories, such as phonemes or tone patterns. There have been many studies on auditory categorical perception and its neural correlates (e.g., event-related responses), and we are trying to apply this knowledge to design a new language-learning system. For this, we use a so-called oddball paradigm. This paradigm presents a long sequence of sounds consisting of, let’s say, ‘ra’ and ‘la’. When the sound ‘ra’ is presented 85% of the time and ‘la’ less than 15% of the time, our brain is known to produce a signal for the odd sound, ‘la’. This response is like a marker, showing that the brain has detected a mismatch in the sound sequence. The response is also known to depend on one’s categorical knowledge. For example, the difference between ‘ra’ and ‘la’ is very difficult for native speakers of Japanese like myself. If we presented this sequence to a native Japanese listener, I would expect the mismatch response to be much smaller than that of a native Dutch listener, because Japanese does not make a distinction between r and l while Dutch does. We know that this type of response correlates with learning: while you are learning new categories, your mismatch response typically increases… it sometimes even precedes your behavioral response. This means that you may not yet be able to consciously hear the difference between the sounds, but your brain already knows the distinction!
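As a rough illustration of the paradigm (the 85%/15% proportions and the ‘ra’/‘la’ sounds come from the interview; the code itself is only a sketch, not the EarOpener stimulus software), an oddball sequence can be generated like this:

```python
import random

# Sketch of an oddball stimulus sequence as described in the interview:
# the standard 'ra' occurs ~85% of the time, the deviant 'la' ~15%.
# (Illustration only, not the actual EarOpener code.)

def oddball_sequence(n_trials, standard="ra", deviant="la", p_deviant=0.15):
    return [deviant if random.random() < p_deviant else standard
            for _ in range(n_trials)]

seq = oddball_sequence(200)
print(seq[:20])
print("deviant proportion:", seq.count("la") / len(seq))
```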

That is fascinating, and it makes sense too. Our brain already starts to pick up the signal before we can consciously deal with it. In the EarOpener project we applied classification methods to capture this brain response online. The responses are very small, but with a sophisticated classification algorithm we can tell the absence or presence of the detection signal with an accuracy of about 65-75%. The idea is to present this absence/presence information to learners as feedback during learning and to see whether it influences their learning behavior. This project is quite exploratory, but very fascinating, and we do have some evidence that neurofeedback has an impact on learners’ brain responses during learning.”
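The interview does not say which algorithm is used, but single-trial “response present/absent” detection in EEG is often set up as binary classification over epoch features. Here is a minimal sketch with a linear discriminant on synthetic data (all names and numbers below are illustrative assumptions, not the project’s pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hedged sketch of single-trial mismatch-response classification.
# The interview reports ~65-75% accuracy but not the method; a linear
# classifier over EEG epoch features is one standard choice, assumed here.

rng = np.random.default_rng(0)
n_trials, n_features = 400, 64                    # e.g. channel x time means
standard = rng.normal(0.0, 1.0, (n_trials, n_features))
deviant = rng.normal(0.15, 1.0, (n_trials, n_features))  # small shift ~ MMN
X = np.vstack([standard, deviant])
y = np.array([0] * n_trials + [1] * n_trials)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
# On synthetic data like this, accuracy lands in roughly the same
# 65-75% range as reported in the interview.
```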

“I am trained as a classical music player – piano is my main instrument – and I went to the conservatory to study composition. I was very good at grasping a musical structure right away: I could play well at first sight, and I was quite active as an accompanist when I was in Japan. Now all that has faded a bit, which is a pity. Every now and then I try to revitalize my musical activity by planning little concerts with a friend. But research also brings me new musical knowledge, especially through teaching and research at the UvA and the ILLC. I have learned a lot of nice musical pieces from musicology students. I have also started to like electronic dance music, which I had never listened to before, via a research project with Aline Honingh. It is very nice that I am developing a new range of musical habits through my work.”

 

Slips of the hand – An interview with Roland Pfau

by Gisela Govaart, January 2015

Roland Pfau received his MA (1995) as well as his PhD degree (2001) from the University of Frankfurt. He joined the University of Amsterdam in May 2001 as assistant professor in General Linguistics. In his research, he focuses on the morphological and syntactic structure of sign languages as well as on processes of grammaticalization, often taking a typological and cross-linguistic perspective on these issues.

Roland Pfau

“There are a couple of things I remember I wanted to be when I was young. Amongst my earliest recollections, when I was like 5 years old, are garbage collector and skiing instructor. A few years later, I remember that I wanted to become a teacher. Then, in my teenage years, my dream was to be a shoemaker. I have a bit of a shoe fetish, and I always thought that it was so difficult for men to find nice shoes. I mean, nice shoes that are not ridiculously fancy, just a decent but slightly original shoe. Shoemaker was actually the last thing I wanted to be before things got serious and it turned out that I would proceed towards an academic career.”

“During my PhD, which was on slips of the tongue, my professor got interested in sign language. She had been approached by the state government of Hessen in the context of an ongoing debate in the state parliament about acknowledging German Sign Language as a minority language. She had worked on sign language a little bit before, but following this request, she got enthusiastic about the topic and found some money to offer sign language classes. A couple of colleagues and I took these classes, and at some point we decided to set up a research group in order to study the structure of German Sign Language. This really had an impact on my research. I still think that the topic of my PhD is highly interesting, but what started as a hobby, the sign language research, soon became the number one topic for me.”

“Later on, as a member of the research group in Frankfurt, I also did some work on speech errors in sign language. Slips of the hand, as we call them, are very intriguing and highly informative when it comes to the nature of language and how it is processed. Just like words, signs consist of smaller parts, such as a handshape, a location, and a movement. The smaller parts of a spoken language, the consonants and vowels, are sequentially organized. In a sign language, however, at least to some extent, this is different. If you have a location and a handshape, then it is certainly not the case that you first articulate the handshape and then the location. The question is whether these building blocks can still be separately affected in speech errors, despite their simultaneous nature. And the answer is yes; although these sub-lexical elements are organized differently, they are treated in a similar way. Similar to a consonant exchange like ‘redding wing’ instead of ‘wedding ring’, you can also get a handshape exchange, and just as in spoken languages, a slip may result in an existing sign, or in a non-existing but possible sign. However, slips of the hand, just like slips of the tongue, (almost) never result in an impossible sign, one that would violate the phonotactic constraints of the language. Thus, when it comes to error types and error units, sign languages behave pretty much the same as spoken languages.”

“When you tell people what you do, sign language always triggers reactions. ‘Oh that is so nice, with sign language everyone can communicate in the same language’. Then, of course, I cannot hold back, and I have to point out that this is not true, as there are many different sign languages. Yet, in general, in my private life, I try not to annoy people too much with longish lectures about linguistic stuff.”

“In the early years of sign language research, the 1970s and 80s, linguists tended to stress the similarities between sign languages and spoken languages. They did that because it was important at the time to demonstrate that sign languages are fully-fledged natural languages, and in order to do so, it was crucial to show that they behave similarly to spoken languages when it comes to structural organization. Once this had been established, linguists began to pay more attention to the differences between sign languages and spoken languages. Most obviously, sign languages and spoken languages use different modalities of signal transmission: the visual-gestural modality for sign language versus the oral-auditive modality for spoken language.

The difference in modality leads to three important structural differences. The first one has to do with the lexicon: signs are more likely to be iconic than words. It is simply much easier to express an object or an action in an iconic way with your hands than with your voice. The second difference is phonological in nature. For the articulation of signs, you have two identical articulators: your two hands. There is nothing comparable in spoken language. The availability of two articulators would in principle allow a signer to use them simultaneously and fully independently of each other in the articulation of signs. Interestingly, however, it has been shown that the full range of possibilities is not exploited in sign languages. For example, if both hands move in a two-handed sign, they have to be specified for the same handshape. This phonological rule has been shown to constrain the form of signs in all sign languages investigated to date. There are exceptions, but then we are not dealing with lexical forms, but rather with morphologically complex forms. For example, if my right hand is flat, representing a car, and the left one has the index finger extended, representing a person, then I can move them in parallel, yielding a morphologically complex expression.

The third difference is the use of space in front of you. This space is used in most, if not all, sign languages, for grammatical purposes, for instance, for the realization of pronouns. If I want to talk about my brother, who is not present, I would sign ‘brother’, and then associate an arbitrary locus in the signing space with my brother, by pointing. Later in the discourse, by pointing to the same location, I can refer back to the brother, that is, the pointing is interpreted as a pronoun meaning ‘he’. Once again, the abstract phenomenon is not different from spoken language pronominalization, but the surface form is clearly different, because the spatial strategy allows for a more concrete way of expressing this grammatical category.”

“When researchers first started to compare sign language structures, that is, when sign language typology was put on the research agenda in the late 1990s, it was really as if a light went on. Before, everyone had been investigating his or her own sign language, but putting the patterns into a bigger picture is of course super fascinating. If you find structural similarities among sign languages, that is interesting, and if you find differences, it is maybe even more exciting. It is a bit of a research philosophy of mine that I strive to put whatever I write about into a typological perspective.”

“Most of my research focuses on morphosyntax and syntax. To give a few examples, I have done work on agreement in sign languages and on question formation in Indo-Pakistani Sign Language. One of my favorite topics for many years has been negation. Sign language negation is intriguing, I think, because it involves a manual element, a manual negative particle, but also non-manuals. In this context, a non-manual means a headshake, of the kind we also use as a co-speech gesture in spoken language. It has been demonstrated that in sign language, the headshake is really an integral part of the grammar of the language, not just a gesture. While the distribution of headshakes accompanying spoken utterances is quite random, across sign languages it is systematic and rule-governed, and it differs from sign language to sign language.”

“Of course, I want to contribute to the body of knowledge about sign language. But ultimately, I hope that I can also give something back to the deaf community. Sign languages are threatened languages, and I therefore find it very important to make an effort to share research findings with the community. In a way, I do this as editor of the journal Sign Language & Linguistics, where we publish papers on sign language structure. Something that I have not done much in the past, but that I hope to accomplish in a research project that started in February, is to also publish in a format that is more accessible to deaf people: not just scientific articles, but papers for a lay audience, hopefully also in sign language. Actually, in this project, which investigates argument structure in sign languages, we put money in the budget to produce summaries in sign language, which will be made available online.”

 

‘A bridge between theoretical linguistics and the practice’ – an interview with Jeannette Schaeffer

by Gisela Govaart, March 2015

Jeannette Schaeffer is full professor of Language Acquisition. She specializes in language acquisition by typical, impaired, and multilingual populations. She earned her MA degree from Utrecht University in 1990 and her PhD degree from UCLA in 1997, and was a postdoc at MIT until 1998. From 1998 to 2011 she worked at Ben-Gurion University in Israel as an assistant and later associate professor. She has been affiliated with the University of Amsterdam since 2011.

Jeannette Schaeffer

“During my MA in Utrecht, I wanted to go abroad, because I thought Holland was a bit boring. For my MA thesis I wanted to make a comparison between Dutch and Italian child language. Since there was no CHILDES database or anything at that time, I had to go to Italy to collect data. I started writing letters to funding agencies, and I got a few hundred guilders here and there, enough to go to Venice and Rome for 5 months. After graduating I wanted to go back. So again, I started writing to those agencies and got money to go there for a year. I did linguistic research, practiced my Italian, and most of all I had a great year. I really liked Italy, but they had about one PhD position a year, and they were not going to give that to a foreigner. My professors (Longobardi and Cinque, both at the University of Venice at the time) suggested going to the USA. Without my knowing it, they were actually famous linguists, and their recommendation letters got me in, I think. So it just sort of happened that I became a professional linguist. I wanted to be abroad, to do something different, and this was a way to do it. Later, during my PhD, I really became passionate about linguistics and could no longer imagine doing anything else.”

“I am very lucky that I have been working in so many nice institutes. I did my PhD at UCLA, and a post-doc at MIT, on the other side of the States. After that I had to find a real job, and there were not that many jobs around. The year I applied there were two jobs right up my street: one in Israel, and one in Jamaica. It took me a while to make the choice. Jamaica seemed like fun, but I knew that there were more serious linguists in Israel. I lived in Be’er Sheva, a town in the south of Israel, for about ten years. It was a very interesting period, both work-wise and socially, but it is not an easy country to live in, because of the eternal political and religious conflicts. After 4 or 5 years, I realized that people expected me to take sides, and that made it difficult as well. But the language situation is extremely interesting, as there is a lot of bilingualism. In the 90s there was a huge immigration from Russia, after the fall of the Berlin wall. In Be’er Sheva especially, there was a huge Russian-speaking population, so we conducted a lot of studies on Russian and on Russian child language.”

“Before I started to focus on impairments, I concentrated on language acquisition in different languages with typically developing children. I compared Dutch with English and Italian; later, when I worked in Israel, I also included Russian and Hebrew child language. Later on, I became interested in how children with some sort of impairment acquire their mother tongue. Recently, I have been studying a group of children with specific language impairment (SLI) and a group of children with high-functioning autism. They seem to have very complementary impairments. Children with SLI are often characterized as children who mainly have a grammatical impairment, whereas children with autism are usually very good at grammar. Children with autism have other problems, mainly with the pragmatics of language, the social communication. I was interested in seeing whether I could show that grammar and pragmatics are two different components of language that can be impaired independently, because that would give some evidence for the idea that they are separate components of language.”

“I just finished analyzing all of the data, and it seems to be the case that there is a sub-group of children with autism that is only pragmatically impaired, children who are good at grammar. There is also a group of children with autism that does have a grammatical impairment. With SLI, we see that all 28 children we tested have a grammatical impairment, and a sub-group of about 10 has an additional pragmatic impairment. For this latter group, then, there is some co-morbidity of pragmatic and grammatical impairment. Yet 18 children with SLI are grammatically, but not pragmatically, impaired, and we see a lot less co-morbidity in the children with autism. So the autism and SLI subgroups show that grammar and pragmatics can be impaired independently. In that sense, I find evidence for the hypothesis that grammar and pragmatics are separate components.”

“I think the model of language consists of at least the lexicon, pragmatics, and grammar, which I would say is an umbrella term for phonology, morphology, syntax, and semantics together. I do not want to defend the modularity hypothesis the way people like Jerry Fodor and Noam Chomsky did in the old days, because their modules were encapsulated and would not interact with any other module. I do believe there is interaction, and I am intrigued by the question of what the interaction and the division of labor between all these components is. My main research question really is how language and its different subcomponents interact with other parts of cognition. Where do I find correlations, and where not? And then, if you find correlations, the big challenge of course is to find out whether there is a causal relationship.”

“Now in this new position as full professor I have to manage a lot. It is a challenge to keep it all together, especially in these turbulent times, but I am enjoying it. I am also getting to know the structure of the university better. I have only been in Amsterdam for 3 years. I still do not know everybody in the faculty. It’s a steep learning curve, but I surely enjoy getting to know all these new people.”

“One of my aims is to build a stronger bridge between theoretical linguistics and the practice: the world where speech therapists work, and where teachers have to deal with children with language impairment without yet having enough knowledge to do so. This also has to do with the new law in Holland, ‘Passend onderwijs’ (‘appropriate education’), under which more children with impairments have to be in regular education. We theorists have to make our results applicable and usable for the people in the field. In Israel I had PhD students who were speech therapists themselves, who came to do PhDs with me. That was a beautiful way to build bridges with the practice. Another way to go is to organize workshops. Two years ago, I helped Anne Baker organize a “Practitioners Day”, tagged on to a bi-annual European workshop on Specific Language Impairment, which was held here in Holland. We invited all the speech therapists in Holland to this day, where we tried to make available what was currently being discussed on SLI among linguists and psychologists at the European level, but in more accessible terminology. I think more of these types of workshops are necessary.”

“I try to spend my free time with my family, doing fun things with them. My children are 8 and 6, and my partner and I enjoy taking them to children’s tours in museums, for example. I try to give them an interesting cultural education, and to make them aware of the fact that the world is a very international place. Of course, Amsterdam is always very good for that. When I go out with my 8-year-old, she hears a lot of languages. She can actually distinguish them, because she grew up with a lot of languages: Dutch, English, Hebrew and French. Unfortunately, only the English and the Dutch survived.”

“Amsterdam is my final destination. I have been very lucky, because you cannot really choose your city in this type of profession. But if I had to choose a city in the world where I would have to end up, Amsterdam would definitely be in the top 3.”

Once a linguist, then a phonetician, soon a cognitive scientist – an interview with Kateřina Chládková

by Gisela Govaart, August 2015

Kateřina Chládková is a post-doctoral researcher in phonetic sciences at the Amsterdam Center for Language and Communication. She works within a project on speaker and accent normalisation in speech perception. During her PhD project (2009-2013) she investigated the perceptual plausibility of phonological feature categories.

Kateřina Chládková

“My current position here at the UvA is for just one year, so I have only two months left to finish what I am doing! The project is about speaker and accent normalization in speech perception. It is a joint project of the University of Western Sydney, where Paola Escudero (the project leader) is based, the University of Amsterdam, and Leiden University. Here in Amsterdam we test babies and adults, in Leiden they mostly do experiments with zebra finches, and in Australia they also test babies and adults. Our experiments are aimed at finding out whether listeners handle speaker differences between sounds and dialect differences in the same way. It seems that for dialects you obviously need lexical knowledge, or linguistic experience, to be able to normalize for those differences. But it might be the case that speaker or gender normalization is something more fundamental in our auditory system, maybe not only for humans but for other species as well. That is why we use so many different groups: little humans, big humans, and birds.”

“I did my BA at Palacký University in Olomouc, Czech Republic. I actually started doing mathematics and English philology there, but after a year and a half I dropped mathematics. I do not know whether it was the subject that was so difficult, or the approach in which it was taught. You had to memorize a lot of things, and I did not like that. I stopped at my geometry exam, which I took together with a friend who was also studying English and mathematics, and who was struggling with the subject as well. At the exam I just told her: ‘I am done with this, I am leaving, are you coming with me?’ And she did.”

“I continued with English and Dutch philology, but I was very much interested in linguistics and phonetics. Because I was studying Dutch, I did an Erasmus exchange at Utrecht University, for half a year. That is how I got to know people in Amsterdam, and I decided to do my master’s here, because it was not possible to study phonetics and linguistics in depth at my old university back then. Originally I wanted to do the two-year research master, but I did not get a scholarship, so I chose the one-year master, to be sure that I would have enough money to survive.”

“My PhD project started really suddenly. After the MA, I was planning on going back to Olomouc, but then Paul Boersma got a VICI grant for the project ‘Emergent Categories and Connections’. I really liked the project, and applied for a PhD position. My project was supposed to be about how we learn categories in speech. But it turned out that we do not really know what these categories are, so I wanted to find out what the phonological categories are that we actually learn and use when communicating with people.”

“My project was about phonological features in speech perception. I did behavioural perception and EEG experiments with adults. I also simulated speech learning with neural networks. I found that real people listen to speech sounds in terms of phonological feature categories (and not only in terms of phoneme categories). A phoneme, like /p/ or /a/, is the linguistic representation that corresponds to what we experience as sound segments in speech. A feature corresponds to a (most often) smaller entity than a segment. For instance, the segment ‘a’ has several important frequencies, a particular duration and intensity, and some of these component properties of the sound segment are, in the language user’s grammar, encoded as features. My simulations also show that an artificial brain is able to create feature categories on the basis of the input that it gets during language acquisition. In short, in my PhD project I showed that we have phonological feature representations, and that we use them when listening to speech and when learning it.”
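The interview does not describe the networks actually used, but the core idea, categories emerging from distributional input alone, can be illustrated with a toy competitive-learning sketch (the one-dimensional F1 input and all numbers are assumptions for illustration):

```python
import numpy as np

# Toy sketch of category emergence from distributional input (not the
# actual model from the PhD project): two competitive-learning units
# exposed to first-formant (F1) values from two vowel-height groups
# settle on two feature-like categories (roughly [+high] vs [-high]).

rng = np.random.default_rng(1)
f1_high = rng.normal(300, 30, 500)   # high vowels: low F1 (Hz)
f1_low = rng.normal(700, 30, 500)    # low vowels: high F1 (Hz)
data = rng.permutation(np.concatenate([f1_high, f1_low]))

weights = rng.uniform(200, 800, size=2)   # two category units
lr = 0.05                                 # learning rate
for x in data:
    winner = np.argmin(np.abs(weights - x))        # closest unit responds
    weights[winner] += lr * (x - weights[winner])  # and moves toward input

print("learned category centres (Hz):", np.sort(weights))
# -> approximately [300, 700]: the input has been carved into two categories
```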

“The idea that people have “feature detectors” already came up in the 1950s. But then, quite suddenly, researchers seem to have abandoned this idea and started to argue that we listen to phonemes. So nowadays, when you say that language users have feature detectors, people do not always believe you immediately. For example, I presented my findings at the University of California, at a phonetics and phonology seminar, and they seemed to be perceived as quite controversial. But that is actually a very good thing, because it heats up the discussion and can lead others to design their own experiments to challenge the findings.”

“I think it is quite important to include modelling in my research. If you design your experiments with humans very carefully, you can draw quite robust conclusions about what kind of representations they have. But what you are actually looking for is what language in our brain looks like. And you cannot really look into the brain; normally, you cannot cut open your participants’ brains and see what happens in their neural networks. It is therefore important to include computational models in linguistic or psychological research. If you mimic the performance of real people with a model, then you have the model and you can look inside it.”

“Earlier in my research I used Optimality Theory, which is one of the models for phonology, and in my PhD research I did modelling with neural networks. You never know which model resembles the human brain most closely, but it seems that neural networks are more likely to be like the human brain than Optimality Theory is. For instance, Optimality Theory assumes an infinite number of possible candidates for every single word that you are planning to produce or will perceive, and it seems very implausible that such an infinite database is contained in the brain of a real person.”

“I decided to do my PhD here because of the excellent research group and their approach to research. I do like the Netherlands, and I love the city of Amsterdam, but it still does not feel entirely like home. It feels like a second home; I am still rooted very much in the Czech lands, where I have close friends and family. Luckily, my boyfriend came with me when I started my PhD. I would not have started if he had not come along. He is a software developer, so he can find a job anywhere.”

“There have been a few moments when I doubted whether I should stop doing research. During my PhD, I sometimes really did not see the end. When your experiment does not work, or does not turn out as you expected and you have trouble interpreting the results, or even when you are not able to find the right next question to answer, because maybe your previous experiment gave a null result, you can get demotivated. Obviously, it is not so exciting when you are not getting nice results. There are many people who get null results from their experiments, and they get demotivated partly because these results are impossible to publish, and then it looks as if they have done nothing. I think it is important that these results become publishable as well: after all, they do tell us something about the question they were designed for, and they should be made known to others (who can then try to replicate your experiment, or include different variables than you did, for instance).”

“Before I started my PhD I would have called myself a linguist, during my PhD I would have said I was a phonologist and a phonetician, and now I think that I have mostly been a psycholinguist – well, actually a “psycho-phone-ist”, if that exists. But what I would like to be is a cognitive scientist. I really want to get more of a grip on domains of cognition other than speech and phonetics, like attention or visual cognition. That is why I want to do my next project at a department of cognitive psychology. I like linguistics and phonetics very much, but I also want to learn new things and explore other fields.”

Making sense of fallacies – an interview with Robert van Rooij

 

by Gisela Govaart, September 2014

Robert van Rooij is professor of Logic and Cognition at the ILLC. He is best known for his work in the formal semantics and pragmatics of natural language (e.g. comparatives, conversational implicatures, presuppositions) and philosophical logic (including vagueness, truth, and conditionals). He also has a keen interest in the evolution of language.

Robert van Rooij

“Before I studied philosophy, I did horticulture. My father was a grower, and I thought I would follow in his footsteps. During the weekends and the vacations I always worked in horticulture at home, until I was 20 or so. Many people were working there in the summer and on the weekends, and it was a lot of fun. But during that study I found out that I wanted to do something else. I was always interested in history and philosophy. I thought history was the worst study to get a job with, so I studied philosophy (which was maybe even worse). The two things I found most interesting there were philosophy of science and philosophy of language. But for me, studying philosophy of science did not make a lot of sense because I had not studied a science before, so I decided to go for language and to study linguistics as well.”

“I try to change topics every once in a while. You feel freer if you are relatively new to a field. I like it that when you are still a bit naive about a new topic, you dare to say things that somebody who has read all the literature on the topic would not dare to say. If you say something that is really absurd, you will find out soon enough. But if you try a lot of things, eventually something comes out that makes sense.”

“Causality is for me such a topic. I have never written anything about it, but I am going to teach about it, and I hope to learn a lot from that. I will teach the course together with Katrin Schulz, my wife, who is more of a specialist on this topic. I work together well with her. We know each other and we have the same academic interests. But she is actually also very critical, so if something I write is not good, she will be the first one to say so. I hope I do the same with her. The papers I wrote with her are my most cited papers.”

“I started on vagueness because I was invited to write about it. I do not know why they asked me, but it was actually good. What is problematic about vagueness is that every natural language predicate seems to be vague. For instance, what is the meaning of ‘red’? Where does ‘red’ start and where does it end? To model that is difficult. If you use logic for the analysis of natural language, you try to translate each natural language sentence into a logical formula, and then you know what follows from which sentence. That is how you get the meaning of the sentences. The problem with vagueness is that you also want to hold on to certain intuitive principles. For instance, if somebody is small, and somebody else is one millimeter taller, then this second person should be small as well. But then you very soon obtain the result that everybody is small, and that everybody is not small. That is obviously a paradox. To get rid of that you have to think about what intuitive principles of logic to give up. That is what a lot of people in philosophical logic are thinking about. I have come up with one solution, and I am working that out. It turns out that this solution works for other paradoxes as well.”
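Spelled out, this is the classical sorites reasoning: an apparently harmless tolerance principle, iterated, leads to an absurd conclusion (the formalization below is the standard one, not taken from the interview):

```latex
% The sorites paradox spelled out (standard formalization).
% Let S(n) stand for: "a person who is n millimetres tall is small."
\begin{align*}
& S(1500) && \text{(a 1.50 m person is clearly small)} \\
& \forall n\,\bigl(S(n) \rightarrow S(n+1)\bigr) && \text{(tolerance: 1 mm cannot matter)} \\
& \therefore\; S(1501),\; S(1502),\; \dots,\; S(2500) && \text{(by repeated modus ponens)}
\end{align*}
% Conclusion: a 2.50 m person is "small"; by parallel reasoning starting
% from a clearly tall person, everybody is also "not small". Paradox.
```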

“I am not only interested in how to model meanings, but also in how meaning evolved. I want to know how a language can evolve in such a way that it accounts for meanings compositionally in communication. I am using evolutionary game theory, where you explain some of the phenomena that you find in language by saying that these properties have evolutionary advantages. You try to come up with a simulation on the computer, and then the simulation shows that there is an advantage, for example of compositionality, so that compositional languages will come to dominate other languages.”
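As a minimal illustration of the method (a toy two-state Lewis signaling game under replicator dynamics; this is an assumption-laden sketch of the general technique, not the compositionality simulations themselves):

```python
import itertools
import numpy as np

# Toy evolutionary-game-theory sketch (illustrating the method only, not
# the actual compositionality simulations): in a 2-state/2-signal Lewis
# signaling game, languages that pair meanings with signals consistently
# earn higher communicative payoff and spread under replicator dynamics.

states = signals = [0, 1]
# A language = (sender map: state -> signal, receiver map: signal -> state)
languages = list(itertools.product(itertools.product(signals, repeat=2),
                                   itertools.product(states, repeat=2)))

def success(speaker, listener):
    send, _ = speaker
    _, receive = listener
    return np.mean([receive[send[s]] == s for s in states])

payoff = np.array([[(success(a, b) + success(b, a)) / 2
                    for b in languages] for a in languages])

rng = np.random.default_rng(2)
x = rng.dirichlet(np.ones(len(languages)))  # random initial population mix
for _ in range(500):                        # discrete replicator dynamics
    fitness = payoff @ x
    x = x * fitness / (x @ fitness)

best = np.argmax(x)
print("dominant language:", languages[best], "share: %.2f" % x[best])
# Typically one of the two perfect signaling systems ends up dominating.
```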

“With vagueness, the problem is that this is not helpful: you can use evolutionary game theory to prove that, under ideal circumstances, vagueness will never be beneficial. So how can it be that language is vague? To explain that, you have to weaken certain idealistic assumptions standardly adopted in (evolutionary) game theory. What do you have to give up in order to have vague languages evolve? It turns out that the assumptions people make in behavioural economics to explain their data are very similar to the assumptions we need. When you build those into your simulations, vagueness will actually evolve.”

“When I did my PhD (which dealt with the analysis of propositional attitudes and anaphora) I was at an institute for computational linguistics, the Institut für Maschinelle Sprachverarbeitung in Stuttgart, Germany. I learned a lot of linguistics, and when I came to Amsterdam I actually did a lot of linguistics. But about 5 years ago, I realized that issues like finding out what the meaning of ‘even’ or ‘only’ is were not really what I wanted to work on. I went back a bit to philosophy, and a bit away from formal semantics. I am now using non-standard logics to investigate reasoning, and game theory and decision theory to investigate language. You might think that game theory and decision theory are really different from logic, because standard logic is qualitative, whereas game theory and decision theory use numbers. But I have always thought that it is not just that you can use insights from game theory and decision theory to reason about what you should do and what you want others to do, but that game theory and decision theory are actually part of logic.”

“One prominent use of language is to convince others to do certain things. That is the area where probability theory and non-monotonic logic are more interesting than standard logic. In argumentation theory there is a lot of research about fallacies. You can use logic to show why a fallacy is not good reasoning. But of course we all do use these types of reasoning, and there must be a reason for that. One reason could be that we are all stupid, and that is, in a sense, true. But standard logic does not work well in such situations; you need to come up with alternative models. In decision and game theory there are more ways to slightly change things, to model people who are less smart, remember less, and so on. But even more importantly, many fallacies do in fact seem to work. Using probabilistic reasoning you can actually show that, even though these fallacies do not yield logically valid conclusions, they often do make sense. I think that probability theory is going to be useful in pragmatics, because it can explain better than standard logic why arguments traditionally called fallacies are actually used so much.”
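A standard textbook illustration of this point (a worked example added here, not taken from the interview): ‘affirming the consequent’ is deductively invalid, yet on a probabilistic reading the observed evidence genuinely supports the hypothesis.

```latex
% Illustrative worked example (not from the interview): affirming the
% consequent ("if H then E; E; therefore H") is invalid as a deduction,
% but probabilistically E can still support H. Suppose H entails E,
% so P(E | H) = 1, with priors P(H) = 0.1 and P(E) = 0.4. Then
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
            \;=\; \frac{1 \times 0.1}{0.4} \;=\; 0.25 \;>\; P(H) = 0.1
\]
% Observing E does not prove H, but it makes H 2.5 times more probable:
% the "fallacy" is reasonable as non-demonstrative inference.
```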

“In linguistics, you only study pragmatics for cases in which the speaker and the listener agree with each other on what they want. But of course, that is the ideal case; typically, if you talk with someone, he or she does not share all your goals. Game theory was developed to study situations where people do not have the same interests. It seems only natural to use it for communicative situations as well, as a way to enlarge the scope of pragmatics, and that is also something I try to do in my research.”


Music as a second language – an interview with Paula Roncaglia-Denissen

by Gisela Govaart, August 2014

Paula Roncaglia-Denissen is a postdoctoral researcher at the Music Cognition Group at the UvA. She grew up in Brazil, studied linguistics in São Paulo and Rio de Janeiro, and did her PhD at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, where she studied the interaction between speech rhythm and syntax in second language learners. She now works in the NWO-Horizon project ‘Knowledge and Culture’.

Paula Roncaglia-Denissen

“I did my master project at the Max Planck Institute in Leipzig and then realized that I wanted to stay in research and continue the work I had started there. My supervisor said she would like to have me in her team, but I would have to find my own financial support. So I started applying to different PhD programs, and finally I got accepted into a program called MaxNet Aging. MaxNet Aging is an interdisciplinary research program of the Max Planck Society, created to investigate how the effects of age are perceived and addressed by different disciplines, such as linguistics, psychology, history, ethnology, and law. It was a very nice program. I profited a lot from this interdisciplinary background, and I made some of my very good friends during those years.”

“In the first year of my PhD I met my husband, who is Dutch and worked in Berlin back then. Two years into my PhD I knew we would get married, and that we would be going back to the Netherlands because of his job. I did everything I could to collect my data in the second year of my PhD. I worked 7 days a week at times to test all the participants I needed, but I was determined to be close to him, so I did what it took. In my last year I just had to analyse my data and write my dissertation, so I could be with him in the Netherlands.”

“As people say: what scientists do is not research, but me-search! Learning a second language is a big part of my life and of my interests. When I was a teenager and learned English, I realized that speaking a language opens up a whole new world of possibilities, and I wanted to have access to them. Then I decided to study linguistics, and I also learned German and studied Spanish and Romanian. Dutch came into my life because of my husband. Since I learned my first foreign language, I have always been interested in why some people are better at learning a language than others, and in how one could learn a language with more ease. Also, my private life shaped my professional life the minute I met my husband. From that moment onwards my life took a specific turn. I would not be in the Netherlands, and probably I would not be doing this job, if it was not for him. So I focused my research to fit my personal life. It was the first time I had done such a thing, and at times it felt difficult, but it made sense to me back then and it still does now.”

“For me, the Netherlands is a softer side of Germany. People are more relaxed here. It is somewhat in between Brazil and Germany: people work just as hard as in Germany, but they also know how to enjoy themselves and they have free time and they value that. I think that is a good balance.”

“When I was pregnant with my first child, and also almost finished with my PhD, I decided I wanted to continue in research, but I did not want to take just any job, simply for the sake of working. It would have to be something that I really loved; otherwise I would have taken time off to take care of my child. When I found out about the position I have right now, I fell madly in love with it. I applied for this position and no other. I was told I was not the first candidate, though: they had somebody with a stronger music background. But the position was supposed to be mine! It turned out the other person did not take it. I have been here ever since.”

“I am very much an outsider in musicology, so sometimes I really feel like an intruder. The musical exposure I had was that I played the recorder for 6 years when I was very little, and I danced classical ballet for 9. I came into contact with music again in an indirect way, via linguistics, because I was interested in speech prosody. For second language learning, prosody is the icing on the cake: even if you achieve all the proficiency you can possibly get at all the different linguistic levels of the second language, prosody might still give you away and reveal that you are not a native speaker. I wanted to know why it is so hard to acquire prosody, and how we can facilitate prosody learning for people who are acquiring a second language.”

“When I did my master’s I fell in love with speech rhythm. I thought it was awesome that rhythm gives away so much about a language. Studies with non-human primates, rats, and babies show that they are able to distinguish languages based on their rhythmic properties alone. The explanation is that speech rhythm is based on acoustic features like duration and intensity, which are also present in music. I figured that if speaking languages with different rhythmic properties trains the ability to perceive these duration and intensity features in language, then maybe this could translate into an effect on the perception of rhythm in music as well. And indeed, we found that people who speak languages with different rhythmic properties are better at perceiving rhythm in music.”

“This finding was not my original PhD goal; it was something that I thought of along the way. But it was very fortunate, because that is how I got back into music. I learned a lot about music during my PhD, and I am still learning, but I always try to be very humble towards it. I still feel that I have to apologize all the time if I am making claims about music to musicologists (laughter). But it is also a nice thing to get out of your comfort zone!”

“For me, musicality is the ability to get into music and to enjoy it; to recognize music as such and to resonate with it. So if you can resonate with the music, then you are musical. I have a very mainstream taste in music, namely pop. I also listen to a lot of Brazilian music, such as samba, bossa nova and Brazilian popular music (MPB).”

“In my current job I am investigating what syntax in language and music might have in common. There is a very popular and well-accepted hypothesis proposed by Aniruddh Patel, saying that even though the elements and their representations are different, the resources used to process language and music are the same. There is a lot of supporting evidence for this hypothesis, and it makes everybody very happy, because it keeps domain specificity but still accounts for a domain-general mechanism underlying the processing of language and music. It says: every domain has its specificity, but whenever these things are integrated, we have this common parser. Recently, however, research has come out suggesting that music and language do not share the same integration mechanism. This research suggests that when you are processing language together with music, your attention is actually divided. So, whenever there is a violation in both language and music, it is harder to process, because your attention splits. Right now I want to investigate whether it is attention or the parser that accounts for the findings in the literature.”

“My child is one and a half years old, and I am expecting a second one, who should be born in November. My daughter is growing up bilingually. I guess I like studying bilinguals so much that I wanted to create my own cohort (laughter)! Most of her words right now are in Portuguese, because she spends most of her time with me. My husband and I speak English and Portuguese at home. Dutch is spoken only when we are together with my in-laws.

Raising my daughter bilingually is actually a lot harder than I thought it would be, because it takes more time, energy, and patience than one thinks. Also, there are a lot of things you have to commit to, such as making my peace with the fact that Dutch is probably going to be my daughter’s dominant language, because her whole schooling will be in Dutch. People believe that if you are raised bilingually, you learn two languages perfectly and you are just as good in both languages in every context of your life. This is actually not true. There is always going to be one language that is the strongest in certain contexts and situations. The fact that she will probably prefer to speak Dutch rather than Portuguese is still a bit difficult for me to accept. Nothing against the language, which I find fun to speak and interesting to learn, by the way. It is just an odd feeling to know that what I consider a foreign language, and in which I have a hard time expressing some thoughts and feelings, will be the most comfortable one for my children. But for me it is very important that my children learn Portuguese: it is my language and my culture, and they should have access to the world where I and half of their family come from. Sometimes I am speaking to my daughter and she answers using Dutch words. It is difficult, but I pretend that I do not understand what she means and repeat it in Portuguese.”

“I am still very much interested in how we learn a second language. In my opinion, music is a kind of second language for a lot of people. I would like to know why some people are better at learning a second language than others, even though they might have the same training, exposure and background. By understanding these little differences between people, in how they process music and language together or separately, we can understand what makes one person better at it than another. Also, music and language are uniquely human features. By understanding what they have in common, we can also understand what makes us human.”