
Monday, 6 January 2020

Accent Bias: Responses to Voices

Continuing our series of posts related to the 'Accent Bias in Britain' project, in this blog post we discuss some findings from our research which investigated current attitudes to accents in Britain.



In the most recent blog post, we explored the findings of the first part of our study which investigated attitudes to accent labels. The second part of our study, detailed here, investigated how people responded to recordings of speakers with different accents to see if the same accent bias exists in speech. 

To examine these questions, we recorded 10 speakers of 5 different accents (2 speakers each). These accents were Multicultural London English (MLE), Estuary English (EE), Received Pronunciation (RP), General Northern English (GNE), and Urban West Yorkshire English (UWYE). Speakers of these accents were recorded reading scripted mock interview answers. 

These recordings were then played to over 1,100 participants aged between 18 and 79 from across the country. The sample of participants was balanced for both ethnicity and gender. 

For each of the 10 mock interview answers the participants heard, they were asked to evaluate the candidate's performance, knowledge, suitability, and hireability for a job. Participants rated the candidate on a 10-point scale, where 10 is the highest. They were asked to respond to questions such as:

  1. “How would you rate the overall quality of the candidate's answer?”
  2. “Does the candidate's answer show expert knowledge?”
  3. “How likely is it that the candidate will succeed as a lawyer?”
  4. “Is the candidate somebody that you personally would like to work with?”
  5. “How would you rate the candidate overall?”

The participants also provided information on their age, social background, and education. 

When we analysed the results, we found a significant effect of the listener's age. Older listeners generally rated the two southern accents (MLE and EE) lower than all of the other accents. Younger participants, however, did not show this pattern. 

You can see this effect in the graph below. On the right are the older participants and on the left, the younger participants. The higher the line, the more positive the evaluation. As you can see, the ratings drop as you move from the younger respondents to the older ones. 
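To illustrate the kind of comparison involved, here is a minimal sketch in Python of computing mean accent ratings by listener age group. The data and variable names are invented for illustration only; they are not the project's actual dataset or analysis code.

```python
from statistics import mean

# Hypothetical ratings: (listener_age_group, accent, rating on a 10-point scale).
# These numbers are made up to illustrate the pattern described in the text.
ratings = [
    ("younger", "RP", 8), ("younger", "MLE", 7), ("younger", "EE", 7),
    ("younger", "GNE", 7),
    ("older", "RP", 8), ("older", "MLE", 4), ("older", "EE", 5),
    ("older", "GNE", 7),
]

def mean_rating(group, accent):
    """Average rating a given age group assigns to a given accent."""
    values = [r for g, a, r in ratings if g == group and a == accent]
    return mean(values)

# Compare how each age group evaluates the two southern accents.
for accent in ("MLE", "EE"):
    print(accent, mean_rating("younger", accent), mean_rating("older", accent))
```

In the real study this comparison was made with a statistical model rather than raw group means, but the underlying question is the same: does the average rating of a given accent differ between younger and older listeners?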

Is accent bias decreasing or is this just 'age-grading'?

This could mean one of two things. It could be that general attitudes to accents are changing, such that today's younger listeners will retain their more tolerant attitudes later in life. On the other hand, this could be evidence of age-grading: young people may be more tolerant of accent diversity in their early years but become more critical as they get older.

A second finding of this study was that people's evaluations of the accents depend on the type of question being answered. In questions that require a degree of technical or specialist knowledge, such as those asking for specific details about the law, all accents were rated more favourably. In more general questions, such as those asking about the candidate's personal details or work experience, the accents were downrated much more.

Degree of expertise and accent rating

The effect of the 'expertise' required is shown in the graph above. The yellow line indicates 'expert' answers and the green line indicates 'non-expert' answers. As you can see, all accents are rated much lower when the answer is a 'non-expert' one than when it is an 'expert' one. 

We also asked participants a series of questions designed to test how prejudiced they were. We predicted that the more prejudiced people were, the lower their ratings of the different accents would be. This is exactly what we found. See the graph below. 

More prejudiced listeners were more likely to downrate all of the accents  

Those who reported that they were more likely to be prejudiced towards different accents gave much lower ratings than those who were more motivated to control their prejudice. The graph above shows ratings by MCPR (Motivation to Control Prejudiced Reactions) score. The blue line represents those who reported that they are not prejudiced towards different accents, whereas the green line represents those who report more prejudice. 

What these results suggest is that there is a systematic bias against certain accents in England (particularly Southern working-class varieties), whereas RP is evaluated much more positively and is perceived to be the most suitable for professional employment.

However, these results are reported for the general public. Would we see the same types of evaluations amongst those who are responsible for hiring candidates? In the next blog post, we explore this question. In the meantime, you can find out more about the project by visiting the project website.


This summary was written by Christian Ilbury

Tuesday, 17 September 2019

You are what you Tweet!


In the time that it takes you to read this article, millions of users will have sent a Snapchat, uploaded an Insta Story, and updated their Twitter profiles. The age of digital culture is very much upon us. For linguists, the contemporary networked society offers a way to explore language use beyond the traditional method of recording and interviewing speakers. This includes studies which examine the dialectal distribution of words and features across different parts of the country. One such paper is Grieve and colleagues’ recent Twitter-based analysis of lexical variation in British English.

Traditionally, linguists interested in researching dialectal variation (i.e., linguistic features specific to a particular geographic region or group) have set about researching this topic by conducting surveys and interviews with speakers of a particular variety. For instance, a linguist might ask someone to name “a narrow passageway between or behind buildings”. If you’re from the south, you might say ‘alleyway’, but northern speakers might call it a ‘snicket’ or a ‘ginnel’.

With the advent of social media, however, linguists no longer have to elicit these words directly. Instead, they can extract massive datasets from social media platforms to examine where in the country these words are used most.

In their 2019 paper, Grieve and colleagues used a corpus (i.e., dataset) of 180 million Tweets to examine lexical variation in British English. Helpfully, since tweets include what is known as ‘metadata’ that relates to the location in which the tweet was sent, Grieve and colleagues were able to plot these tweets on maps to identify where these words were most frequent. They compared their analysis with the more traditional approach taken in the BBC Voices project.
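As a rough illustration of this kind of corpus analysis, here is a minimal Python sketch of computing, per region, the share of a dialect word among its competing variants. The tweets, region labels, and word pair are invented for illustration; real Twitter metadata supplies coordinates or place names that would first have to be resolved to regions.

```python
from collections import Counter, defaultdict

# Hypothetical geotagged tweets: (region, text). In real data, the region
# would be derived from each tweet's location metadata.
tweets = [
    ("north-east", "the bairn is asleep"),
    ("north-east", "taking the bairn to school"),
    ("north-east", "the child laughed"),
    ("south-east", "the child is asleep"),
    ("south-east", "a child ran past"),
]

VARIANTS = {"bairn", "child"}

def variant_shares(tweets):
    """For each region, the share of 'bairn' among all bairn/child tokens."""
    counts = defaultdict(Counter)
    for region, text in tweets:
        for token in text.lower().split():
            if token in VARIANTS:
                counts[region][token] += 1
    return {
        region: c["bairn"] / (c["bairn"] + c["child"])
        for region, c in counts.items()
    }

print(variant_shares(tweets))
```

Plotting these per-region shares on a map is, in essence, how dialect maps like those in the Twitter study are built, though the published analysis works at a far larger scale and with more sophisticated smoothing.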

Their analysis very convincingly shows that the lexical variation observed in the Twitter data mirrors that identified in more traditional analyses! This finding is shown in the graphic below, where for all 8 words, the Twitter maps look comparable to those created for the BBC Voices project. For instance, consider the maps for the word ‘bairn’ – a word meaning ‘child’ that is typically heard in northern UK dialects (second row, right). The BBC Voices project map and the Twitter map are virtually indistinguishable. Across both maps, this word appears largely confined to the north/north-east of the UK – as expected.



Whilst, for the most part, the traditional dialect maps and the Twitter dialect maps look very similar, Grieve and colleagues note some differences. For instance, in the Twitter dataset, ‘bairn’ accounts for a maximum of 7.2% of instances of the word ‘child’, even in the areas where it is stereotypically associated with the local dialect. This is in comparison to the BBC Voices dataset, which reports a maximum of 100% of instances of ‘bairn’ for ‘child’ in some areas. Discussing the reasons for this difference, Grieve and colleagues explore several possibilities. First, they suggest that the difference may be related to a decline in the usage of this word: it is possible that ‘bairn’ has simply become less popular over time. However, the difference might also have something to do with the type of data we get from Twitter and the way it is analysed in large-scale studies such as this. In particular, the authors note that it is impossible to examine the conversational context of a tweet. As such, it’s possible that there are some contexts in which users would write ‘child’ even if they use the dialectal term ‘bairn’ in speech – for instance, when reporting someone else’s speech.

Nevertheless, these issues aside, Grieve and colleagues’ analysis suggests that the findings observed in large-scale dialectal surveys are largely mirrored in the Twitter data. As such, we can expect more and more sociolinguistic research to examine data from social media sites, such as Twitter, in the future. So, it seems, you really are what you tweet!

------------------------------------------------------------

Grieve, Jack; Chris Montgomery; Andrea Nini; Akira Murakami & Diansheng Guo (2019). Mapping Lexical Dialect Variation in British English Using Twitter. Frontiers in Artificial Intelligence. https://doi.org/10.3389/frai.2019.00011


This summary was written by Christian Ilbury