A short story about AI, refugee claims, and linguistic profiling
I was looking for a case.
Not a legal case in the strict sense; more the kind of small language moment that opens a bigger question. I often scan public controversies, court stories, institutional documents, and everyday exchanges for places where language does more than pass along information.
Sometimes it is obvious: a phrase is ambiguous, a definition is contested, a decision turns on a single word. Other times it is quieter: a change in tone, a telling silence, or a category that appears without being named.
This time I used an AI model (in private mode) to brainstorm possible examples. I was not looking for a final answer, and I was not outsourcing the analysis. I was doing what many writers and researchers now do: using the model as a sparring partner, testing ideas, comparing examples, and seeing which questions were worth pursuing.
We moved from legal linguistics in Vancouver to refugee claims. The chat was still in English, and the topic was straightforward: what kinds of linguistic expertise might matter in a refugee claim, especially where credibility, interpretation, narrative structure, or institutional questioning are involved.
Then something interesting happened.
The model switched to Spanish.
I had not asked it to. I had not changed languages. The previous messages were in English. But the reply came back in Spanish, as if the topic itself had pushed the system across a linguistic border.
The model presumably meant this as a simple correction.
But for me, it was the key part of the exchange.
Because the issue was not just a language glitch. The issue was that the model treated a topic as evidence of who the user must be.
“Refugee claim” became “Spanish speaker.”
Legal vulnerability became a language label.
A context stood in for a person.
And that is where the anecdote stops being anecdotal.
From a language switch to profiling
The unprompted switch to Spanish may sound minor. In a casual chat, you could laugh it off as a harmless glitch. But in discourse analysis, small shifts matter because they shape how an interaction works.
Choosing a language positions both speakers. It sets expectations. It can signal welcome, distance, authority, familiarity—or an assumption about who the other person is.
Here, the switch was not based on anything I said or did in the conversation. It seemed to come from a statistical association: this topic often goes with that language.
The model appeared to follow a chain like this:
- refugee claim → migrant claimant
- migrant claimant → likely Spanish-speaking
- likely Spanish-speaking → Spanish reply
The problem is not that these links could never be true. The problem is that the model acted on them without asking, checking, or needing to.
That is what makes the moment analytically useful. It shows how talk can slide from context to category—and from category to treatment—before anyone has said who they are.
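To make the mechanism easier to see, here is a minimal, deliberately naive sketch in Python. Everything in it is invented for illustration (the topic table, the association strengths, and both function names); no real system is built this way, least of all a large language model. The point is only the contrast between acting on an association and responding to the actual person.

```python
# A hypothetical sketch of "indexical overreach": a system acts on a
# statistical association between topic and language without ever
# checking the user's actual preference. The table and the strengths
# below are invented for illustration only.

TOPIC_LANGUAGE_ASSOCIATION = {
    # topic: (language most often associated with it, strength of association)
    "refugee claim": ("Spanish", 0.6),
    "tax filing": ("English", 0.8),
}

def overreaching_reply_language(topic: str, default: str = "English") -> str:
    """Treats a statistical hint as if it were proof: the guessed
    language is used directly, with no confirmation step."""
    language, _strength = TOPIC_LANGUAGE_ASSOCIATION.get(topic, (default, 1.0))
    return language  # the index (the hint) decides the outcome

def accommodating_reply_language(topic: str, stated_preference: str | None,
                                 conversation_language: str) -> str:
    """The repair: an association may motivate a question ("Would you
    prefer Spanish?"), but the reply follows what the user actually
    said, or the language the conversation is already in."""
    if stated_preference:
        return stated_preference
    return conversation_language  # respond to the person, not the category

# The exchange in the story, reduced to its skeleton:
print(overreaching_reply_language("refugee claim"))  # -> Spanish
print(accommodating_reply_language("refugee claim",
                                   stated_preference=None,
                                   conversation_language="English"))  # -> English
```

The difference between the two functions is the whole point: the first lets the category speak first, while the second waits for the person.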
How assumptions do their work
The issue is not that Spanish is “wrong.” The problem is assigning Spanish to someone who did not ask for it.
That distinction matters.
In another setting, switching to Spanish could be helpful. It could be a genuine attempt to accommodate someone’s preference.
But accommodation only makes sense if it responds to the actual person in front of you.
When a language shift is based on a guessed identity, not an expressed preference, it becomes something else: a projection.
This is where a small AI moment starts to look like a larger institutional pattern. Institutions run on categories: they sort people into types so they can process cases, forms, risks, and decisions. The danger is when those categories start speaking before the person does.
When that happens, the institution may believe it is being reasonable. It may even believe it is being efficient. But discourse analysis asks a different question:
What assumptions had to be in place for this reading to feel natural?
Yes, it’s bias, but what kind?
It is tempting to call the language switch “bias” and stop there. That would not be wrong, but it misses what is most useful to notice: the mechanism.
“Bias” is a helpful public word. People understand it. But it can also be too broad: it can name the problem while hiding how it happened.
Here is the more specific version.
The system treated an index (a hint) as if it were proof.
The topic, refugee claims, was taken as pointing to a likely identity. That guess then shaped the language of the reply.
This can be considered a form of indexical overreach: treating a hint as enough to act on.
An index is a sign that points toward something: an accent might suggest a region; a term might suggest a professional community; a genre might suggest an institution. But pointing is not proving.
Indexical overreach happens when we push that pointing too far, when we treat what “often goes with” something as if it has already been confirmed in this interaction. It is the difference between noticing a possible clue and acting as if the clue settles the question. That difference can be small in form and large in consequence.
Why it matters in legal and institutional settings
In everyday conversation, an unwanted language switch is mostly just awkward. In legal and institutional settings, the same kind of assumption can carry real consequences.
Think of a refugee hearing, a police interview, an immigration form, a workplace investigation, or an administrative decision. In these settings, people are not just speaking; they are being assessed.
Their words can be judged for credibility, consistency, precision, cooperation, emotional “fit,” or plausibility.
That means that the frame of interpretation matters.
If an institution meets someone through the wrong category, everything that follows can shift. A story can sound evasive because it does not follow the expected order. A pause can be read as uncertainty. A culturally shaped way of explaining can be dismissed as irrelevant. A translation can be treated as the speaker’s own phrasing.
None of these problems requires open hostility.
That is precisely why they are difficult to detect.
Institutional bias often works through reasonable-looking shortcuts. A shortcut becomes a habit. A habit becomes an expectation. An expectation becomes a standard, and the standard starts to look neutral.
This is one reason linguistic analysis matters.
It slows things down.
It asks how a conclusion was reached. It separates what was said from what was inferred. And it shows where language, category, and institutional expectation have been welded together.
The AI didn’t invent the problem
It would be easy to make this story “about AI.” But AI did not invent this problem. It just made it easier to see.
Human institutions have long done similar things: they map speech to identity, identity to credibility, and credibility to outcomes.
The AI simply compressed that chain into one visible moment.
It saw a topic and selected a language.
That is why the anecdote matters: not because the machine behaved strangely, but because it behaved in a familiar institutional way: fast, patterned, and rarely questioned.
The point is not to reject AI as a tool. I was using it myself, on purpose, as part of brainstorming. The point is to see clearly what kind of tool it is.
AI can help you explore: it can generate contrasts, list possibilities, and narrow a question. But it also repeats learned patterns—associations and assumptions included.
For a linguist, that is not only a limitation; it is data.
The question behind the switch
The most important question is not “why did the model switch languages?” The better question is “what made that switch seem appropriate?”
That is also the question we should ask in institutional discourse.
- What made an officer’s question seem neutral?
- What made a claimant’s answer seem inconsistent?
- What made a translation seem equivalent?
- What made a category seem obvious?
- What made an interpretation feel natural before it was tested?
Discourse analysis begins right there, not with the dramatic statement, but with the ordinary move that goes unnoticed.
- A pronoun
- A silence
- A sequence
- A translation choice
- A question format
- A shift in language
Small features can expose large assumptions.
Conclusion: when language decides too soon
The AI’s switch to Spanish was a small moment. No one was harmed. No legal outcome depended on it. No one’s credibility was on the line. But as an example, it matters. It shows how quickly language can decide who someone is, before the person has said so.
It shows how a context can become a category, how a category can become a treatment.
And it is a reminder that communication is not only about what is said. It is also about the assumptions that shape what can be heard.
In legal and institutional settings, those assumptions deserve scrutiny.
Because sometimes the problem is not that someone misunderstood the words. Sometimes it is that language assumed too much before the person had a chance to speak.
