Monday, 16 March 2020

#BlackLanguageMatters: Can linguistics change the course of justice?

The 2013 trial of George Zimmerman for the murder of unarmed Black teenager Trayvon Martin is well-known as the court case that sparked the #BlackLivesMatter movement in the USA. 17-year-old Martin was shot dead by Zimmerman, who claimed he was acting in self-defence and was eventually acquitted of all charges. The outcome of the trial caused outrage among the Black community over racial profiling, police brutality and inequality in the criminal justice system, and prompted the founders of #BlackLivesMatter to use the hashtag for the very first time.

It’s less well-known that the case also served as a ‘call to action’ among linguists. John Rickford and Sharese King of Stanford University studied the court proceedings closely, focusing on the testimony of one particular witness, Rachel Jeantel. A close friend of Martin, Jeantel was on the phone to him just moments before his death. As such, she represented an important ‘ear-witness’ and testified for over 6 hours in court, but her testimony was completely disregarded by the jury, who found her to be unintelligible and ‘not credible’.

What does this have to do with linguistics? Jeantel is a speaker of African American Vernacular English (AAVE), also known as African American Language (AAL): a variety of English spoken by many Black Americans. AAVE has been studied extensively by linguists, who have shown that it is a systematic and rule-governed dialect of English like any other. Nevertheless, like most ‘non-standard’ vernaculars, AAVE is often stereotyped by non-linguists as uneducated and broken. Jeantel’s speech is no exception: Rickford and King note that she was ridiculed on social media throughout the trial, labelled as ‘inarticulate’ and ‘the perfect example of urban ignorance’.

As well as being lampooned online, Jeantel’s testimony was overlooked by the jury in their decision-making. Commenting after the case, one juror said that Jeantel was ‘hard to understand’, and another reported that ‘no one mentioned Jeantel in [16+ hour] jury deliberations. Her testimony played no role whatsoever in their decision’ (Juror Maddy, as reported in Bloom 2014), despite the fact that she had been on the phone with the victim moments before the shooting took place.

In their paper, Rickford and King set out to investigate linguistic reasons for why this happened. They start by closely analysing over 15 hours of Jeantel’s recorded speech, to see how it compares to that of other AAVE speakers. They found her speech to be ‘a systematic exemplification of the grammar of AAVE’. In other words, it displays patterning in lexicon, grammar and phonology that is typical of AAVE, and also reflects the possible influence of Jeantel’s Haitian mother and Anglophone Caribbean Creole-speakers living in Miami. Given these findings, the possibility that Jeantel was not understood because her speech was incoherent – or, as one commentator described it, ‘the blather of an idiot’ – is clearly ruled out. Why, then, did the jury neither understand Jeantel nor consider her testimony to be important in their deliberations? Rickford and King look at two possibilities in their paper: the influence of social bias and the issue of dialect unfamiliarity.

It is likely that social bias had an effect on jurors’ ability to understand Jeantel as well as their assessment of her credibility. Rickford and King cite several studies that show that ‘speech perception is influenced by listeners’ stereotypes of speaker characteristics’ – in other words, if White listeners believe that a speaker is Black, their comprehension actually decreases. Importantly, the Zimmerman trial jury was primarily White, middle-aged and suburban, with no African American members.

Considering dialect unfamiliarity as a factor, Rickford and King list a number of other court cases in which vernacular language has been misheard or mistranscribed. Part of the problem, they explain, is that courtrooms do not provide interpreters for dialects, but only for ‘foreign languages’. In other words, an interpreter would be provided for a Spanish or Vietnamese-speaking defendant, for example, but not offered to a speaker of Bajan Creole or AAVE. Depending on the dialect in question, this can lead to dangerous misunderstandings: Rickford and King give the example of a police interview in which a Jamaican Creole speaker’s words, given verbatim in (a), were first transcribed as in (b).

(a) wen mi ier di bap bap, mi drap a groun an den
when I heard the bap bap [the shots], I fell to the ground and then
mi staat ron.
I started to run.
(b) When I heard the shot (bap, bap), I drop the gun, and then I run.

As this example shows, the distinction between ‘languages’ and ‘dialects of a language’ is not always clear-cut, and listeners are likely to have difficulties with comprehension if they are not familiar with the variety being spoken. In the Zimmerman case, Rickford and King show that Jeantel used several preverbal tense-aspect markers in her speech, such as stressed BIN, completive done, and habitual be. The authors point out that these features of AAVE have been mis-transcribed by non-AAVE speakers in other cases, meaning that it is very likely they were misunderstood in this case too.

Rickford and King conclude that AAVE was, in a way, ‘found guilty’ in the Zimmerman trial, since responses to Jeantel’s dialect unfairly prevented her testimony from being heard or properly understood, and undoubtedly affected the outcome of the case. In light of this, they argue that courtrooms are in serious need of expert linguistic input and dialect interpretation, and strongly urge linguists to help make courtrooms fairer places. More broadly, Rickford and King point out that language prejudice affects outcomes not only in the criminal justice system, but also in education, employment and healthcare, and call on linguists to dispel myths about speech and language in all domains of life.


Bloom, L. (2014). Suspicion nation: The inside story of the Trayvon Martin injustice and why we continue to repeat it. Berkeley, CA: Counterpoint.

Rickford, J. R., and King, S. (2016). Language and linguistics on trial: Hearing Rachel Jeantel (and other vernacular speakers) in the courtroom and beyond. Language 92/4, 948-988.

This summary was written by Rosemary Hall

Monday, 17 February 2020

"Thanks, no problem, pleasure, don't mention it, thanks"

I once heard that how someone treats a waiter can say a lot about their character. What about the way a waiter responds? Researcher Larssyn Rüegg thinks that there may be differences in how waiters respond to their customers’ thanks, based on the kind of restaurant they are in.

While previous research has examined how this pragmatic function of the thanks response may differ across languages, none so far has looked at how thanks responses might vary within a single language. Rüegg's research builds in part on earlier work by Klaus Schneider, who classified the different forms that thanks responses can take. One example is the welcome type, which includes phrases such as 'you're welcome', or even just 'welcome'. Other types include okay, anytime, no problem, pleasure, don't mention it, thanks, yeah, sure, and don't worry about it. Rüegg extends this work by asking what influences the choice among these response types. She identifies two potential factors: socio-economic setting and the type of favour.

Research strongly supports the idea that service staff tend to select a style of speech deemed appropriate to their clientele, so their speech should therefore reflect social stratification. Based on this, Rüegg decided to use a corpus of naturally occurring talk in restaurants of different price ranges to represent different socio-economic settings. This corpus, the Los Angeles Restaurant Corpus (LARC), contains three categories (LARC-up, LARC-mid, and LARC-low), each reflecting a different price range.

The first finding from this study is as we would expect: thanks responses in LARC-up and LARC-mid were 50% more frequent than those in LARC-low. Yet even in LARC-up and LARC-mid the frequency of thanks responses is quite low, with expressions of thanks being responded to less than 25% of the time.
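As a rough sketch of how such response rates could be computed from an annotated corpus, consider the following (the counts below are invented purely for illustration, not Rüegg's actual data):

```python
# Hypothetical counts of thanks expressions and thanks responses per
# LARC category, used only to illustrate the response-rate calculation.
counts = {
    "LARC-up":  {"thanks": 80, "responses": 18},
    "LARC-mid": {"thanks": 90, "responses": 20},
    "LARC-low": {"thanks": 70, "responses": 9},
}

def response_rate(category: dict) -> float:
    """Proportion of thanks expressions that received any response."""
    return category["responses"] / category["thanks"]

for name, c in counts.items():
    print(f"{name}: {response_rate(c):.0%} of thanks were responded to")
```

With counts like these, all three rates fall below 25%, with LARC-low trailing the other two categories, mirroring the pattern Rüegg reports.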

The form of thanks responses also differs across the socio-economic categories. For example, the most common response types in LARC-up and LARC-mid, such as welcome and thank you, are not found in LARC-low. Furthermore, customers in the LARC-low restaurants use thanks responses that are not present in either LARC-up or LARC-mid, such as yeah and absolutely. Interestingly, LARC-mid displays the most variation in types of thanks responses.

The type of act which waiters are thanked for shows distinctive patterns as well. A non-verbal service act elicits the most thanks responses in LARC-up and LARC-mid. Such acts include clearing or setting the table, or perhaps bringing the bill. Interestingly, such acts never elicit a thanks response in LARC-low. Enquiries by the service staff about the guests' well-being do not elicit a thanks response in LARC-low either. Serving food or drinks is correlated with socio-economic setting, with customers in LARC-up giving the most thanks responses, and those in LARC-low the least. On the other hand, verbal offers of service, such as 'Do you need more wine?' or 'Anything else?', more consistently generate thanks responses across all categories.

Through this research, we can see that thanks responses in English are not very frequent on the whole. This is in contrast to some other languages. In addition, the sensitivity of thanks responses to socio-economic setting suggests that they are a subtle form of cultural encoding, with common responses in LARC-up and LARC-mid restaurants possibly signalling formality. Furthermore, thanks responses do not appear to be very standardized, with a wide range of forms being used, especially in LARC-mid and LARC-low. The fact that the type of service performed elicits differing thanks responses across the different socio-economic settings reinforces the sense that these small linguistic acts are actually a rich form of interactional management and cultural signalling.


Rüegg, Larssyn. 2014. Thanks responses in three socio-economic settings: A variational pragmatics approach. Journal of Pragmatics 71: 17-30.

This summary was written by Darren Hum Chong Kai 

Monday, 3 February 2020

The Power of Babble

"Ma-ma, ba-ba, da-da" - you probably associate sounds such as these with babies, in particular the babbling that babies make when they're first acquiring language. But what do these sounds do? And why do babies babble? This is a question that some recent research has addressed.

In their recent research report, Elmlinger, Schwade and Goldstein examined the function of babbling in infants’ language development. They explored the idea that a caregiver’s response to their child’s vocalizations is key to the beginnings of communication, and found that infants themselves may actually be in charge of this process. By 5 months old, babies will babble and expect their adult caregiver to reply, and by 9 months they will begin to produce more speech-like noise once the adult responds to them. Previous research has suggested that parents’ speech matches the child’s current age, changing as the child grows. A baby’s most varied ‘pre-speech’ repertoire of sounds occurs between 9 and 10 months, and this is when a parent’s speech is most sensitive to their child’s vocalizations.

The researchers focused on this age group and were interested in further investigating the relationship between adults’ and infants’ vocalizations by closely examining adult speech in response to infant babble. They used three measures to assess the type of speech parents used to respond to babbling: firstly, they counted the number of different types of words used; secondly, they counted the average number of words in the responses; and thirdly, they calculated how many of the responses were just a single word. Thirty mother-infant pairs participated in the study and were recorded in a naturalistic environment, as the child played, over two thirty-minute sessions. The researchers split the adult responses into two categories: ‘contingent’ responses, which were immediate, direct responses to the child’s babble, and ‘non-contingent’ responses, which did not occur within two seconds of the babbling.
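The three measures above can be sketched in a few lines of code. This is a simplified illustration with invented utterances, not the authors' actual analysis pipeline:

```python
# Sketch of the three measures applied to a set of caregiver responses.
# The example utterances are hypothetical; in the study, 'contingent'
# responses were those beginning within two seconds of a babble.

def measures(responses):
    """responses: list of utterance strings.
    Returns (number of distinct word types,
             mean words per utterance,
             count of single-word utterances)."""
    words = [r.lower().split() for r in responses]
    types = len({w for utt in words for w in utt})
    mean_len = sum(len(u) for u in words) / len(words)
    single = sum(1 for u in words if len(u) == 1)
    return types, mean_len, single

contingent = ["yes", "ball", "that's the ball"]
non_contingent = ["do you want to play with the big red ball"]
print(measures(contingent))       # fewer types, shorter utterances
print(measures(non_contingent))
```

On invented data like this, the contingent responses score lower on all three dimensions of complexity, which is the direction of the effect the study reports.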

Overall, the investigation showed that the mothers produced less contingent than non-contingent speech and that the contingent speech consisted of significantly shorter utterances with simpler words.  They also found that there were more single-word contingent utterances than non-contingent. So, in general, it seems that parents may simplify the whole structure of their speech in response to their child’s babble, suggesting that infant babbling really does influence the adult response. It may be that this immature, pre-speech babble is actually engineered by the child to create language learning opportunities through eliciting simplified, easy-to-learn responses from their caregiver.  In fact, it seems that infant babbling in general is indicative that learning is happening:  It has previously been found that infants more accurately remember the features of objects at which they have babbled than those that have been looked at and handled but not babbled at.  So, when an adult responds vocally to babbling, the already alert child will quickly learn the patterns of their speech. 

Together, these results show that children learn to recognise language much more quickly when the information they need to do so is presented immediately on babbling. During the first year of life, infants associate their babbling with a response from their caregiver, which guides their learning and speech development. So, unlike the Tower of Babel, fabled to have been built to divide people linguistically, in this study the power of babble is shown to rely on infant and caregiver working closely together.


Elmlinger, S. L., J. A. Schwade & M. H. Goldstein. 2019. The ecology of prelinguistic vocal learning: Parents simplify the structure of their speech in response to babbling. Journal of Child Language 16:1-14. doi: 10.1017/S0305000919000291

This summary was written by Gemma Stoyle

Monday, 20 January 2020

Accent Bias: Voices at Work

Continuing our series of posts related to the 'Accent Bias in Britain' project, in this blog post we discuss some findings from our research which investigated current attitudes to accents in Britain.

In our last blog post, we explored some of the findings of the second part of our study which investigated how the UK public evaluated 5 different accents in mock interviews. The third part of our study, detailed here, investigated whether people in positions of power such as recruiters would exhibit the same type of accent biases. 

Our study focuses on a profession that has been previously described as lacking diversity, Law. We were interested in examining whether accent bias interferes with judgements of professional skill. In other words, would a candidate with, say a Multicultural London English accent, be perceived as less professional or competent as their Received Pronunciation speaking peers? 

To investigate this question, we played the same mock interviews as described in our last blog post to 61 legal professionals. We prepared 10 short mock interview answers, varying in quality between ‘good’ and ‘poor’. Before we conducted the experiment, these answers were independently judged as 'good' or 'poor' by a group of 25 legal professionals otherwise unconnected to the project.

To create the mock interviews, we had 10 speakers (2 of each accent) each record the 10 interview responses, resulting in 100 recordings. The accents we tested were: Multicultural London English (MLE), Estuary English (EE), Received Pronunciation (RP), General Northern English (GNE), and Urban West Yorkshire English (UWYE).

From the 100 recordings, our 61 legal professionals heard a random selection of 10 interview answers. They were then asked to evaluate whether they thought the answer was a 'good' answer or a 'poor' answer. They were asked to indicate this on a 10-point scale, responding to the following questions:

  1. “How would you rate the overall quality of the candidate’s answer?”
  2. “Does the candidate’s answer show relevant expertise and knowledge?”
  3. “In your opinion, how likely is it that the candidate will succeed as a lawyer?”
  4. “Is the candidate somebody that you personally would like to work with?”
  5. “How likely would you be to recommend hiring this candidate?”
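The design and the aggregation step can be sketched in code. The ratings below are simulated, and the even split of 'good' and 'poor' answers per speaker is an assumption made purely for illustration:

```python
import random
from collections import defaultdict

# Sketch of the design: 5 accents x 2 speakers x 10 answers = 100
# recordings; each rater hears a random 10 and scores them on a
# 10-point scale. Scores here are simulated, purely to illustrate
# how mean ratings per accent and answer quality can be computed.

ACCENTS = ["MLE", "EE", "RP", "GNE", "UWYE"]
recordings = [(a, s, q) for a in ACCENTS for s in (1, 2)
              for q in (["good"] * 5 + ["poor"] * 5)]

def mean_by_accent_quality(ratings):
    """ratings: list of ((accent, speaker, quality), score) pairs.
    Returns mean score per (accent, quality) pair."""
    scores = defaultdict(list)
    for (accent, _speaker, quality), score in ratings:
        scores[(accent, quality)].append(score)
    return {k: sum(v) / len(v) for k, v in scores.items()}

rng = random.Random(0)
ratings = [(r, rng.randint(1, 10)) for r in rng.sample(recordings, 10)]
print(mean_by_accent_quality(ratings))
```

If raters attend to answer quality rather than accent, the mean for 'good' answers should sit above the mean for 'poor' answers within every accent, which is essentially the pattern we found among the legal professionals.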
When we analysed our data, we identified a surprising effect. Whilst the general public displayed a great deal of accent bias in judging the competency of a job candidate, the lawyers did not follow this pattern. In fact, the professionals did not show significant preferences for Received Pronunciation (RP) or General Northern English (GNE), nor did they show a consistent dispreference for working-class or non-white accents. Instead, they showed a consistent ability to judge the quality of an answer as 'good' or 'poor' regardless of the accent in which it was presented. Their ratings very closely matched those given by the group of professionals who had rated the quality of the written answers.

The graph above shows this effect. The high-quality answers are in yellow and the lower-quality answers in green. As you should be able to see, across the five different accents (on the x-axis), the ratings remain much the same. At the same time, however, it is worth noting that EE and MLE receive the lowest ratings of all the accents.

Note, however, that RP is also rated lower than some of the other accents. This is surprising given that RP was evaluated as the most prestigious accent in the label study. It's possible that this ranking is related to the association of RP with a higher level of education, so that expectations of these speakers are greater.

It is also interesting to note that some of the social factors seen to affect the general public's responses do not seem to influence the professionals' judgements. The age and regional origin of the legal professionals did not affect how they responded to job candidates, unlike what we found among the general public. Their Motivation to Control a Prejudiced Response (MCPR), a psychological factor that had a strong effect on how listeners behaved in our public survey, also did not affect their ratings.

Our findings therefore suggest that when legal professionals are asked to judge the suitability of a candidate, they are able to switch off biases and attend very well to the quality of an answer, judging the competency of the individual independently of their accent. 

Of course, however, the current study simulates just one small part of hiring candidates. It doesn't look at accent bias in other aspects of professional life, like informal interaction during the interview or everyday experiences on the job. So, it's possible that accent bias might influence the candidate's progression later on down the line. 

At least in terms of hiring though, it looks like it's relatively good news for speakers of regional and 'non-standard' accents! 

This summary was written by Christian Ilbury

Monday, 6 January 2020

Accent Bias: Responses to Voices

Continuing our series of posts related to the 'Accent Bias in Britain' project, in this blog post we discuss some findings from our research which investigated current attitudes to accents in Britain.

In the most recent blog post, we explored the findings of the first part of our study which investigated attitudes to accent labels. The second part of our study, detailed here, investigated how people responded to recordings of speakers with different accents to see if the same accent bias exists in speech. 

To examine these questions, we recorded 10 speakers of 5 different accents (2 speakers each). These accents were Multicultural London English (MLE), Estuary English (EE), Received Pronunciation (RP), General Northern English (GNE), and Urban West Yorkshire English (UWYE). Speakers of these accents were recorded reading scripted mock interview answers. 

These recordings were then played to over 1,100 participants aged between 18-79 from across the country. The sample of participants was balanced for both ethnicity and gender. 

For each of the 10 mock interview answers the participants heard, they were asked to evaluate the candidate's performance, knowledge, suitability, and hireability for a job. Participants were asked to rate the candidate on a 10-point scale - where 10 is the highest. They were asked to respond to questions such as:

  1. “How would you rate the overall quality of the candidate's answer?”
  2. “Does the candidate's answer show expert knowledge?”
  3. “How likely is it that the candidate will succeed as a lawyer?”
  4. “Is the candidate somebody that you personally would like to work with?”
  5. “How would you rate the candidate overall?”
The participants also provided information on their age, social background, and education. 

When we analysed the results, we found a significant effect of the listener's age. Older listeners generally rated the two southern accents (MLE and EE) lower than all of the other accents. Younger participants, however, did not show this pattern. 

You can see this effect in the graph below. On the right are the older participants and on the left, the younger participants. The higher the line, the more positive the evaluation. As one can see, the ratings drop when you move from the younger respondents to their older peers. 

Is accent bias decreasing or is this just 'age-grading'?

This could mean one of two things. It could be that general attitudes to accents are changing, such that younger listeners will continue to exhibit the same accent preferences later on in life. On the other hand, it's possible that this could be evidence of age-grading. This is where young people might be more tolerant of accent diversity in their early years but become more critical as they get older.

A second finding of this study was that people's evaluations of accents in the responses to the interview questions depend on the type of question being answered. In questions that require a degree of technical or specialist knowledge, like those questions which asked specific details about law, all accents were rated more favourably. In more general questions, such as those which asked about personal details or the work experience of the candidate, the accents were downrated much more.

Degree of expertise and accent rating

The effect of the 'expertise' required is shown in the graph above. The yellow line indicates 'expert' answers and the green line indicates 'non-expert' answers. As you should be able to see, all accents are rated much lower when the answer is a 'non-expert' answer than for an 'expert' answer. 

We also asked participants a series of questions designed to test how prejudiced they were. We proposed that the more prejudiced people were, the lower their ratings of the different accents would be. In fact, this is exactly what we find. See the graph below.

More prejudiced listeners were more likely to downrate all of the accents  

Those who reported they were more likely to be prejudiced towards different accents showed much lower ratings than those who were more likely to control their prejudice. The graph above shows ratings depending on MCPR (Motivation to Control a Prejudiced Response). The blue line is those who reported that they are not prejudiced towards different accents, whereas the green line is those who report exhibiting more prejudice.

What these results suggest is that there is a systematic bias against certain accents in England (particularly Southern working-class varieties), whereas RP is evaluated much more positively and is perceived to be the most suitable for professional employment.

However, these results are reported for the general public. Would we see the same types of evaluations amongst those who are responsible for hiring candidates? In the next blog post, we explore this question. In the meantime, you can find out more about the project by visiting the project website.

This summary was written by Christian Ilbury

Thursday, 28 November 2019

Accent Bias: Responses to Accent Labels

Continuing our series of posts related to the 'Accent Bias in Britain' project, in this blog post we discuss some findings from our research which investigated current attitudes to accents in Britain.

In the first part of our study, we replicated Coupland & Bishop's study (2007, summarised in an earlier blog post) to see whether the accent attitudes that people held 12 years ago still persist today. A similar study was conducted by Giles in 1970, giving us a further time point to compare our results.

We recruited a sample of over 800 participants aged between 18 and 79 via a market research firm. The group of participants was intended to be a representative sample of the UK population, so was balanced for gender and region (England, Scotland, Wales, and Northern Ireland) and included all major ethnicity groups.

Once participants had been recruited, they were asked to respond to 38 British accent 'labels', such as 'Estuary English', 'Received Pronunciation', 'Multicultural British English', and 'Birmingham English'. You can listen to some of these accents here. The participants were asked to rate each accent label on a scale of 1-7 - where 1 is the lowest and 7 is the highest - for the prestige and pleasantness of the accent.

After they had completed the survey, we collected social information about the participants, including their gender, ethnicity, age, region of origin, highest level of education, occupation, English accent, and languages spoken. We also asked them to complete a short questionnaire about their exposure to different UK accents, the diversity of their own social networks, and their beliefs about bias in Britain, and to respond to a series of questions designed to measure how concerned they were about being perceived as prejudiced.

As the image above shows, when compared with Giles' results from 1969 and Coupland and Bishop's results from 2004, our findings (2019) demonstrate that, whilst there are some minor differences, overall attitudes to accents in the UK remain fairly stable. Standard accents, such as Received Pronunciation (RP), remain very highly rated, whereas ethnic and urban accents, such as Birmingham English, are rated much less favourably. These findings appear to be stable across the three time points.

Want to replicate this study? 
We've developed a series of Language Investigations and Teaching Units that help students and teachers develop a research project of their own! Head over to Teach Real English! to access these resources.

However, all is not lost, it seems. Although we see similar patterns across the three studies, we do see a gradual improvement in the ratings of the accents rated lowest (Afro-Caribbean, Liverpool, Indian, Birmingham). In fact, our 2019 study reports a marked improvement in the overall ratings of these accents. It's therefore possible that people view these accents much more positively than they did 50 years ago.

However, this study examines only responses to 'accent labels'. What would we find if we played actual audio recordings of these accents to participants? Would we see the same results? In the next blog post, we introduce the findings from the second part of our study. In the meantime, you can find out more about the project by visiting the project website.

This summary was written by Christian Ilbury

Friday, 15 November 2019

Teach Real English!

Did you know that as well as the 'Research Digest', we also maintain the 'Teach Real English!' site?

Our site is an archive of spoken English Language Teaching resources that have been developed by Linguists at Queen Mary University of London. Our website includes: 

  • A database of spoken English (containing sound clips and transcripts)
  • Language Investigations for exploring the English language in every day situations
  • A range of Teaching Units designed to offer secondary school teachers of English language up-to-date examples of English language use
  • Glossaries and descriptions of spoken English features
The materials have been designed for teachers of GCSE and GCE A-Level English Language, but they may be useful for anyone involved in teaching spoken English language.

If you have already used our resource, we'd love to hear your feedback! We are regularly asked to report on usage of the materials we create, so would greatly appreciate you filling in our survey. 

Monday, 11 November 2019

There ain’t nowt wrong with accents

What do you think of when you hear someone speak with a Brummie accent? How about when somebody speaks with a West Country accent? Do you think that some accents are more attractive or prestigious than others? If so, it’s possible that your judgements of these accents are influenced by accent bias.

As part of the Accent Bias project led by academics at Queen Mary University of London and the University of York, over the next few weeks we’ll be uploading a series of Digest posts that discuss the effects of accent bias.

In the first post of the series, we focus on a 2007 study by Nikolas Coupland and Hywel Bishop that investigated how people perceive different types of British accents, looking specifically at whether some accents were evaluated more positively than others.


In their 2007 study, Coupland and Bishop report on a BBC survey that collected 5,010 respondents’ evaluations of 34 different accents. To assess these evaluations, they created an online survey in which participants were asked a series of questions about the prestige and pleasantness of the 34 accents. This included direct questions such as “How much prestige do you think is associated with this accent?” and “How pleasant do you think this accent sounds?”. Participants recorded their judgements by clicking on a seven-point rating scale, where 1 is the lowest rating and 7 the highest. This is what is referred to as a 'label study': participants were not asked to listen to a recording of the accent, but were simply asked to respond to different accent 'labels', such as 'Asian English' or 'Southern Irish'.

Participants also were asked to indicate where in the UK they were from, how old they were, and their gender. The researchers also asked a series of questions about whether the respondent liked hearing different accents to test whether their attitudes towards accents and dialects influenced their ratings of the different accent labels. 

Coupland and Bishop find that, for social attractiveness, accents such as Standard English, Southern Irish, and Scottish are generally positively evaluated, whilst accents such as Birmingham, South African, and Glasgow are typically down-rated; that is, they score much lower. For prestige, however, they observe a slightly different pattern: Received Pronunciation (or the ‘Queen’s English’) scores much higher for prestige than it does for social attractiveness, whilst accents such as Birmingham and Asian English score poorly on both scales.

Interestingly, accents such as Southern Irish English, Newcastle English and Afro-Caribbean English are rated far higher for attractiveness than for prestige, whereas London English, North American-accented English, South African-accented English and German-accented English are all ranked higher for prestige than for attractiveness.

Whilst these ratings reveal more general trends in the social evaluation of different UK accents, Coupland and Bishop suggest that these evaluations may be influenced by the respondents' social characteristics, such as whether they are male or female. Focusing just on ‘prestige’, Coupland and Bishop find that, on the whole, women are more likely than men to evaluate a given accent as prestigious. They also find that where the respondent is based in the UK appears to play a role in their evaluation of a given accent, with participants more likely to evaluate in-group accents favourably. In other words, Scottish respondents were more likely to evaluate Scottish accents positively than respondents from other parts of the country. Similarly, they observe that respondents’ age is likely to influence their evaluations, with the oldest age group showing a stronger preference for their own accents than all other groups. Lastly, they observe that more liberally-minded respondents, who indicated that they appreciate accent variation, were more likely than their peers to rate non-standard accents as prestigious.

So, what does this all mean? Well, Coupland and Bishop note several implications of this study. The first is that language use is influenced by ideology – that is, a widespread system of ideas and values that governs a particular concept or social issue. For instance, they observe that there is a general tendency to rate ethnically linked accents (Asian and Afro-Caribbean) and some of the urban vernaculars (Birmingham, Liverpool, Glasgow) as lower in prestige and attractiveness than non-ethnically linked and rural accents. They argue that this is because there is a widespread belief that people should ‘speak properly’, and so accents that are further away from more ‘standard’ varieties are typically perceived to be less attractive and less prestigious than ones that are closer to the standard.

Whilst these findings may not at first seem very encouraging for speakers of non-standard varieties, Coupland and Bishop suggest that there seems to be a shift towards more liberal attitudes towards accent variation, with younger respondents and those claiming that they like hearing different accents more likely to evaluate non-standard varieties as more prestigious and more socially attractive. So, it seems that although some people might think of certain accents as more attractive or prestigious than others, perceptions are gradually changing.

Given the fourteen or so years since Coupland and Bishop conducted their study, it’s worth considering whether this liberal outlook has continued. Over the next couple of weeks, we’ll be focusing on the Accent Bias in Britain project which, among other questions, sought to investigate this issue. In the meantime, for more information on the project, you can visit the Accent Bias in Britain homepage. You can also find further educational resources, including a Language Investigation and a Teaching Unit on our Teach Real English! website.


Coupland, Nikolas & Hywel Bishop (2007) Ideologised values for British accents. Journal of Sociolinguistics, 11 (1):74-93.

This summary was written by Christian Ilbury

Thursday, 31 October 2019

‘Oh gurl, you Sassy’

‘Slay’, ‘yaas kween’, ‘squad’ – if you’re a keen social media user, you might be familiar with some of these words. Originally from African American Vernacular English (AAVE) – a variety of English spoken by some Black Americans – these terms have quickly become part of the language of the internet. But how and why have these terms entered our lexicon, and what does the use of AAVE in internet communication mean? This and other questions are examined by Christian Ilbury in his recent paper.

Recent sociolinguistic work has often used social media data to examine patterns of written variation – such as whether you spell the word working as <working> or <workin> – in relation to the distribution of the corresponding spoken language feature. An example of this is Grieve’s recent paper, which we discussed in detail in a previous post. In that paper he uses social media data to explore lexical (i.e., word) variation across different areas of the UK. This work demonstrates the enormous potential of using social media data to explore general patterns of accent variation. However, whilst these approaches appear promising, Ilbury suggests that these analyses often miss a fundamental quality of online interaction: that users often use elements of language that are not part of their own speech for certain purposes, such as to adopt a different identity or to signal that the message is humorous.

To investigate this issue, Ilbury turns to tweets from gay men in the UK to examine the ways in which this community use elements of African American Vernacular English. He argues that the gay community in the UK are well suited to examining this phenomenon because aspects of AAVE feature prominently in mainstream gay culture and form much of contemporary gay slang. For instance, drag queens in the UK frequently use aspects of AAVE – such as copula absence, as in ‘she going’ for ‘she is going’, or completive done, as in ‘she done used all the good ones’ – in their performances. Turning to Twitter, he extracted 15,804 tweets from the timelines of 10 self-identifying gay men who reside in the UK and trawled through their tweets to identify features that are typically associated with AAVE.
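To give a rough sense of how a search for such features might work – this is an illustrative sketch in Python, not Ilbury’s actual method, and the feature lists and example tweet are invented – one could scan each tweet against small word lists grouped by feature type:

```python
import re

# Illustrative (far from exhaustive) lists of items associated with AAVE,
# grouped roughly as in the paper: lexical items and respelled sound features.
FEATURES = {
    "lexical": ["slay", "yaas", "y'all"],
    "orthographic": ["dat", "ma"],  # e.g. 'dat' for 'that', 'ma' for 'my'
}

def find_features(tweet):
    """Return (category, item) pairs found in a tweet, using whole-word matching."""
    found = []
    text = tweet.lower()
    for category, words in FEATURES.items():
        for word in words:
            # \b word boundaries stop 'dat' from matching inside 'data'
            if re.search(r"\b" + re.escape(word) + r"\b", text):
                found.append((category, word))
    return found

print(find_features("Yaas kween, slay!"))  # → [('lexical', 'slay'), ('lexical', 'yaas')]
```

In practice, grammatical features like copula absence can’t be caught by word lists alone and would need manual checking of each candidate tweet in context – which is why Ilbury trawled through the tweets rather than relying on automatic extraction.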

His analysis shows that several features characteristic of AAVE are widespread in the gay men’s tweets. This includes lexical features, such as the words ‘slay’, ‘yaas’, and ‘y’all’; the representation of sound features, such as ‘dat’ for ‘that’ and ‘ma’ for ‘my’; as well as several grammatical features, such as copula absence in ‘you nasty’ for ‘you are nasty’ and demonstrative them as in ‘working them boots’.

He argues that the appearance of these features can’t be accounted for by the men trying to represent their own dialect since they are likely to speak a variety of British English that is very different to AAVE. This is in contrast to Grieve’s analysis where the users appear to be representing aspects of their own dialect. This suggests that the men in Ilbury’s study are not attempting to represent their own voices but are rather using elements of AAVE to adopt or perform an altogether different identity.

To investigate what this identity may be, Ilbury looks to popular memes to see how African Americans and AAVE are represented in digital contexts. This includes exploring two memes that reference aspects of AAVE. The first refers to Kimberly ‘Sweet Brown’ Wilkins and the second is entitled the ‘strong independent Black woman who don't need no man’.

'I am a strong independent Black woman who don't need no man' meme (left) &
Kimberly 'Sweet Brown' Wilkins 'Ain't nobody got time for that' meme (right)

He argues that these memes feed into ideological and stereotypical representations of African American women as ‘sassy’. However, this imagery is not new: African American women have frequently been depicted as ‘fierce’ or ‘sassy’, even in much older media representations of this community. These representations are deeply problematic, since they are based on racialised and essentialised ideas about the personal qualities of African American women.

Returning to the Twitter data, Ilbury argues that these representations are helpful in explaining why the men are using features of AAVE. He suggests that it is exactly this ‘sassy’ meaning that the men are ‘activating’ by using components of AAVE. In other words, the men appropriate aspects of AAVE to perform an identity that is non-local and to evoke the essentialised associations of that style, presenting themselves as ‘sassy’ – a quality that has become appreciated in mainstream UK gay culture. He argues that they are not attempting to present themselves as ‘Black women’ but are rather using features of AAVE to appropriate the associations of that variety and perform a gay identity that he refers to as the ‘Sassy Queen’ – where ‘Queen’ is a gay slang term that refers to an effeminate gay man.

Such types of language play, Ilbury argues, are particularly useful in contexts where there is some threat that the user may be read as rude or direct, such as disagreements. In these contexts, the use of this style allows the user to avoid the negative outcomes of the disagreement because the receiver is aware that the user is performing a style that is inauthentic. 

So, whilst social media can tell us a lot about dialectal variation (e.g., Grieve – previous post), it is important to acknowledge that some users will appropriate aspects of other linguistic varieties to perform other identities and utilise the meanings associated with that variety. What users do with that style depends on how it is used in interactions and may differ from community to community.


Ilbury, Christian (Online First/2019) “Sassy Queens”: Stylistic orthographic variation in Twitter and the enregisterment of AAVE. Journal of Sociolinguistics.

This summary was written by Christian Ilbury

Tuesday, 17 September 2019

You are what you Tweet!

In the time that it takes you to read this article, millions of users will have sent a Snapchat, uploaded an Insta Story and updated their Twitter profiles. The age of digital culture is very much upon us. For linguists, the contemporary networked society offers a way to explore language use beyond the traditional method of recording and interviewing speakers. This includes studies which examine the dialectal distribution of words and features across different parts of the country. One such paper is Grieve and colleagues’ recent Twitter-based analysis of lexical variation in British English.

Traditionally, linguists interested in researching dialectal variation (i.e., linguistic features specific to a particular geographic region or group) have set about researching this topic by conducting surveys and interviews with speakers of a particular variety. For instance, a linguist might ask someone what they call “a narrow passageway between or behind buildings”. If you’re from the south, you might say ‘alleyway’, but northern speakers might call it a ‘snicket’ or a ‘ginnel’.

With the advent of social media, however, linguists no longer have to elicit these words directly. Rather, they can extract massive datasets of social media data to examine where in the country these words are used most.

In their 2019 paper, Grieve and colleagues used a corpus (i.e., dataset) of 180 million Tweets to examine lexical variation in British English. Helpfully, since tweets include what is known as ‘metadata’ that relates to the location in which the tweet was sent, Grieve and colleagues were able to plot these tweets on maps to identify where these words were most frequent. They compared their analysis with the more traditional approach taken in the BBC Voices project.

Their analysis very convincingly shows that the lexical variation observed in the Twitter data mirrors that identified in more traditional analyses! This finding is shown in the graphic below, where for all of the 8 words, the Twitter maps look comparable to those created for the BBC Voices project. For instance, consider the maps for the word ‘bairn’ – a word meaning ‘child’ that is typically heard in northern UK dialects (second row, right). The BBC Voices project map and the Twitter map are virtually indistinguishable. Across both maps, this word appears largely confined to the north/north-east of the UK – as expected.

Whilst, for the most part, the traditional dialect maps and the Twitter dialect maps look very similar, Grieve and colleagues note some differences. For instance, in the Twitter dataset, ‘bairn’ accounts for a maximum of 7.2% of instances of the word ‘child’, even in the areas where it is stereotypically associated with the local dialect. This is in comparison to the BBC Voices dataset, which reports a maximum of 100% of instances of ‘bairn’ for ‘child’ in some areas. Discussing the reasons for this difference, Grieve and colleagues explore several possibilities. First, they suggest that the difference may be related to a decline in the usage of this word: it is possible that ‘bairn’ has simply become less popular over time. However, the difference might also have something to do with the type of data we get from Twitter and the way it is analysed in large-scale studies such as this. In particular, the authors note that it is impossible to examine the conversational context of a tweet. As such, it’s possible that there are some contexts where users would write ‘child’ even if they use the dialectal term ‘bairn’ in speech – for instance, when reporting someone else’s speech.
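The relative-frequency figures behind these maps are straightforward to compute: for each region, count how often each variant of a word appears and divide by the total. Here is a minimal Python sketch of that calculation – the counts are invented for illustration (the real study drew on some 180 million geolocated tweets), and the region names are just examples:

```python
# Invented counts of each variant of 'child' per region, for illustration only.
counts = {
    "North East": {"bairn": 72, "child": 928},
    "South East": {"bairn": 1, "child": 999},
}

def relative_frequency(region, variant):
    """Share of a variant among all variants of the same word in a region."""
    total = sum(counts[region].values())
    return counts[region][variant] / total

print(f"{relative_frequency('North East', 'bairn'):.1%}")  # prints 7.2%
```

A figure like the 7.2% maximum for ‘bairn’ is a share of this kind, which is why it can stay low even in the word’s heartland: on Twitter, the standard variant ‘child’ dominates everywhere.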

These issues aside, Grieve and colleagues’ analysis suggests that the findings observed in large-scale dialect surveys are largely mirrored in the Twitter data. As such, we can expect more and more sociolinguistic research to examine data from social media sites such as Twitter in the future! So, it seems, you really are what you tweet!


Grieve, Jack; Chris Montgomery; Andrea Nini; Akira Murakami & Diansheng Guo (2019) Mapping Lexical Dialect Variation in British English Using Twitter. Frontiers in Artificial Intelligence.

This summary was written by Christian Ilbury