What Work is “The Middle East” Doing Here?

Provocations from the January 20 Campus Walk

You walked the campus. You found the Middle East – in department names, on bulletin boards, in conversations, in the library classification system, in course catalogues.

Good. Now let’s think about what you actually found.


Provocation 1: What Work is “The Middle East” Doing as a Category?

You identified something as “Middle Eastern” on campus. A department. An institute. A poster. A person. A language. A book classification.

But what is that category organizing?

Think carefully about this. When an institution creates a department called “Institute for Middle Eastern Studies” or shelves books under “Middle East – Politics,” it’s not just describing reality. It’s organizing reality.

So what’s being organized?

Is it organizing knowledge? (This is what we know about this region)

Is it organizing people? (These are the people who study/come from this region)

Is it organizing funding? (This is where research money for this region goes)

Is it organizing expertise? (These are the people authorized to speak about this region)

Is it organizing danger? (This is the region we need to understand for security purposes)

Is it organizing otherness? (This is what’s different from us)

Here’s the uncomfortable part: All of these are happening simultaneously. The category does multiple kinds of work, often contradictory work.

Questions to sit with:

  • When you saw “Middle East” on a department sign, what did that sign authorize? Who gets to speak? Who gets listened to? Who gets funding? Who gets jobs?
  • When you saw books classified under “Middle East,” what was being produced? Regional knowledge? Strategic intelligence? Academic careers? Or the region itself as an object that can be known?
  • When you identified a student as “Middle Eastern,” what were you organizing? Their identity? Your attention? The diversity statistics of the university? Their foreignness?

What if the category “Middle East” at this university exists primarily to manage – to contain, to study, to make legible, to control – rather than to understand?

What if “Middle Eastern Studies” is less about taking the region seriously and more about producing the kind of knowledge that makes the region knowable to Western institutions?

How would you tell the difference?


Provocation 2: Who Decided What Counts as “Middle Eastern”?

You found Arabic script. You heard languages. You saw posters for events. You identified people. You located institutes.

But who created the boundary that says “this is Middle Eastern” and “this is not”?

Let’s get specific:

On language:

  • You heard someone speaking Arabic in the cafeteria. Did they identify themselves as “Middle Eastern” or did you assign that category to them?
  • If you heard someone speaking German with an accent, did you classify that as “Middle Eastern presence”? Why or why not?
  • What about Turkish? Is Turkey in “the Middle East” or not? Who decided?

On geography:

  • The library has a “Middle East” section. Does it include North Africa? Turkey? Afghanistan? Iran? Central Asia? Who decided where the boundaries are?
  • Are there books about Egypt in both “Middle East” and “Africa” sections? What does that reveal about how regions get constructed?

On identity:

  • You saw someone you identified as Middle Eastern. What visual markers did you use? Skin color? Clothing? Language? How do you know you were right?
  • What if that person identifies primarily as German? As Berliner? As a student at FU? What makes “Middle Eastern” the salient category for you?

On knowledge:

  • The university has institutes that study “the Middle East.” Do those institutes include scholars who are from the region? Or is “the Middle East” something that gets studied about rather than studied from?
  • Whose knowledge counts as knowledge about the Middle East? Academic knowledge? Embodied knowledge? Lived experience?

Even more:

  • Is “the Middle East” a category that people from the region use to describe themselves? Or is it a category imposed from outside?
  • When you use “Middle Eastern” as a descriptor, are you recognizing someone’s identity or are you creating their identity as foreign, as other, as from-elsewhere?
  • What if the people you’re identifying as “Middle Eastern” don’t use that category for themselves at all? What if they’re Moroccan, Egyptian, Lebanese, Palestinian, Syrian – and lumping them all together as “Middle Eastern” erases the specificity of their actual locations, histories, languages?
  • What happens when a category designed by colonial powers (yes, “Middle East” is a British imperial invention) becomes the organizing principle for how a German university structures knowledge?

Who benefits from “the Middle East” existing as a coherent category?

Is it the people from those regions – or is it the institutions that fund research, departments, area studies programs, security studies, strategic intelligence?


Provocation 3: What’s the Difference Between Studying and Containing?

FU has multiple institutes, departments, and programs focused on the Middle East. That represents a significant investment of institutional resources. It could mean:

Possibility 1: This university takes the region seriously as a site of knowledge production, history, culture, and thought worthy of sustained scholarly attention.

Possibility 2: This university needs to understand the region for strategic, political, or security purposes – to manage it, to predict it, to contain potential threats.

How do you tell the difference?

Look at what you found today:

Funding sources:

  • Where does the money come from for Middle Eastern studies at FU?
  • Are there connections to foreign policy institutes? Security studies? Government funding tied to “understanding the region”?
  • Or is it primarily humanities-based, focused on languages, literatures, histories for their own sake?

What gets studied:

  • When you looked at course offerings, what kinds of courses exist?
  • Are they primarily about conflict, terrorism, political instability, migration “problems”?
  • Or are they about poetry, philosophy, art, everyday life, intellectual history?
  • What does the balance reveal about what the university thinks is important to know about the region?

Who studies what:

  • Are Middle Eastern languages taught primarily to students interested in the region’s culture and history?
  • Or are they taught to students in political science, security studies, international relations who need to “understand the region” for policy purposes?
  • Who’s in those classrooms? What are they planning to do with that knowledge?

Where it’s located:

  • Are Middle Eastern studies programs integrated into the main campus or isolated?
  • Are they well-funded or scraping by?
  • Do they have institutional prestige or are they marginalized?

Even more:

  • Can you study a region without participating in its domination?
  • Is all knowledge production about the Middle East implicated in geopolitical power, or can scholarship be genuinely separate from policy interests?
  • When FU trains students in Arabic, Persian, Turkish – are they training cultural mediators or are they training future diplomats, intelligence analysts, NGO workers who will “manage” the region?

Edward Said wrote about Orientalism – the way Western scholarship created “the Orient” as an object of knowledge that could be studied, known, managed, controlled. The “Middle East” is Orientalism’s successor category.

So: Is Middle Eastern Studies at FU continuing that tradition, or breaking from it?

How would you know? What evidence did you see today? What evidence would you need to see?


Provocation 4: What If You Stopped Looking for “The Middle East” Altogether?

You’ve spent months walking Berlin looking for Middle Eastern presence. You’ve gotten good at recognizing it – the signs, the languages, the shops, the restaurants, the people.

But what if that very skill – the ability to identify something as “Middle Eastern” – is a problem?

Think about what happens when you categorize:

On people:

  • You saw someone in the library and thought “that’s a Middle Eastern student.”
  • What made you think that? Their appearance? Their language? Their name?
  • But what if that person is just… a student at FU? What if they were born in Berlin, speak German as a first language, identify primarily as German?
  • By categorizing them as “Middle Eastern,” are you recognizing their identity – or are you refusing to let them be German? Are you marking them as perpetually foreign?

On spaces:

  • You identified certain campus spaces as “Middle Eastern spaces” – where Arabic is spoken, where certain student groups meet.
  • But what if those are just… campus spaces? Places where students gather, study, eat, talk?
  • By marking them as “Middle Eastern,” are you describing neutral fact – or are you othering those spaces, marking them as different from “normal” campus spaces?

On knowledge:

  • You found books classified as “Middle East – History” or “Middle East – Politics.”
  • But those books are about specific places (if you accept national or cultural borders) – Egypt, Lebanon, Palestine, Iran. Why are they all grouped under one regional category?
  • What gets lost when Egyptian history gets classified as “Middle Eastern history” instead of just “history”?
  • Why is there a “Middle East” section but no “Western Europe” section? Why is one region marked and the other unmarked as universal?

The uncomfortable questions:

  • Every time you identify something as “Middle Eastern,” you’re drawing a boundary between “Middle Eastern” and “not Middle Eastern” (which usually means “German” or “European” or “Western” or just unmarked as normal).
  • What if drawing that boundary – even with good intentions, even to recognize diversity, even to celebrate presence – is what keeps “Middle Eastern” as a category of otherness?
  • What if every time you say “I saw Middle Eastern presence at FU,” you’re actually saying “I saw something that doesn’t belong to the normal category of FU”?

Could this entire course be reproducing the problem it’s trying to examine?

You’ve been trained all semester to see “the Middle East in Berlin.” You’ve gotten very good at it. You can identify Middle Eastern restaurants, Middle Eastern neighborhoods, Middle Eastern cultural practices.

But what if that entire project – looking for the Middle East as if it’s a distinct, identifiable thing – is exactly what keeps people, languages, practices marked as foreign, as other, as not-quite-German?

What if the moment you stop looking for “Middle Eastern presence” and just start seeing people living in Berlin, students studying at FU, languages spoken in the city, shops selling things – that’s the moment the category loses its power?

But then what happens to your research? What happens to this course? What happens to Middle Eastern Studies as a field?

If the category is the problem, what’s the alternative?


Provocation 5: So What Now?

These provocations are deliberately uncomfortable. They’re designed to make you question the entire premise of what you’ve been doing.

But discomfort without direction is just paralysis. So: what now?

Some possibilities to consider:

Option 1: Reject the category entirely

Stop using “Middle East” as a descriptor. Talk about specific places, specific languages, specific histories. Don’t lump Morocco and Iran together just because Western institutions decided they belong to the same region.

But: Does this erase the real connections, histories, and solidarities that exist across those regions? Does it ignore how people from those regions do sometimes organize collectively in diaspora?

Option 2: Use the category strategically

Recognize that “Middle East” is a constructed, problematic category – but also recognize that it has material effects. Departments exist. Funding exists. Communities organize around it.

Maybe the point isn’t to reject the category but to be conscious of what work it’s doing and when it’s useful vs. when it’s harmful.

But: Can you use an oppressive category strategically without reinforcing it? Or does every use of the category, even critical use, make it stronger?

Option 3: Complicate the category constantly

Every time you use “Middle Eastern,” immediately problematize it. Ask: whose Middle East? which parts? defined by whom? for what purposes?

Make the category unstable. Refuse to let it settle into common sense.

But: Does constant problematization become its own kind of academic performance? Does it actually change anything or just make you feel better about using a bad category?

Option 4: Focus on power, not categories

Stop asking “what is Middle Eastern” and start asking “who has power to define, to include, to exclude, to represent?”

Make the analysis about institutional power, not about whether categories are accurate.

But: Can you analyze power without using any categories at all? Don’t you need some language to describe who’s being marginalized and who’s doing the marginalizing?


There are no clean answers here.

Every option has problems. Every position is compromised.

That’s the point.

The question isn’t “what’s the right answer” but “what are you going to do with the discomfort?”

Are you going to:

  • Keep using the category unconsciously?
  • Refuse the category entirely?
  • Use it strategically while acknowledging its problems?
  • Spend all your time problematizing it?
  • Find a different framework altogether?

TASK: What to do next week:

Take one observation from today’s campus walk.

Write it two ways:

Version 1: Using the category “Middle East” / “Middle Eastern” / “Arab” / regional or ethnic labels

Version 2: Without using any of those categories – describe what you saw using only specific, concrete details

Bring both versions to the next session on Feb 10. We’ll see what changes.

We’ll see what becomes visible when you remove the category.

We’ll see what gets lost.

We’ll see what the category was actually doing.

And then we’ll decide what to do with that knowledge.

When Frameworks Fail

November 19, 2025 | 10:15-12:45

This session of Beyond Human Language focused on a fundamental question: what happens when the frameworks we use to understand the world prove inadequate to what’s actually there? Using Zoë Schlanger’s The Light Eaters as a lens, we explored how knowledge gets made, who decides what counts, and what we do when we encounter something our training can’t explain.


Opening: The 2012 Problem

We began with a passage from the book:

“In 2012, a group of scientists gathered at the University of Cambridge to formally confer consciousness on all mammals, birds, and ‘many other creatures, including octopuses.’ Nonhuman animals had all the physical markers of conscious states, and clearly acted with a sense of intention.”

The provocation: The internet is older than scientific consensus on animal consciousness.

What emerged in discussion:

We were struck by several things:

  • Scientists had been studying these animals for decades but couldn’t SAY they were conscious until 2012
  • The animals didn’t need the declaration—humans did
  • The declaration was permission: permission to see what was always there
  • Language matters: before 2012, scientists used terms like “behavioral flexibility” instead of “consciousness”

Key questions that surfaced:

  • What took so long? (Answer: not evidence—frameworks, careers, institutional conservatism)
  • Who was this declaration for? (Answer: scientists themselves, who needed permission to say what they’d been seeing)
  • Does recognition change reality? (Answer: not for animals, but it changes everything for how humans relate to them)

The connection to Middle East studies: We identified parallel moments where something was obvious but couldn’t be said until frameworks changed—Orientalism, Palestinian history, women’s agency in Islamic contexts.


Small Group Discussions: Three Problems

Students divided into two of the three prepared groups, each working with a different passage from the book.

Group 1: The Parenthesis Problem

Passage: Ibn al-Nafis (Arab physician, Damascus, 13th century) accurately described pulmonary circulation 300 years before William Harvey. Harvey is in every textbook. Ibn al-Nafis is in a parenthesis.

What they discussed:

  • Why “first European” is a meaningful category (Answer: it’s not—it’s a linguistic trick to avoid saying “second”)
  • How colonial power determined whose knowledge counted as universal
  • Whether the pattern is fixable or structural
  • Examples from their own fields: Ibn Khaldun as “precursor” not “founder” of sociology; Islamic Golden Age as “preservers” not “innovators”

The provocation: What would your field look like if non-European knowledge wasn’t in parentheses?

Group 2: The Definition Trap

Passage: Scientists arguing against plant intelligence were using human-centric definitions of intelligence to prove plants aren’t intelligent—circular reasoning.

What could have been discussed:

  • How definitions exclude by design
  • Examples: “literature” excluding oral traditions; “history” requiring written records; “philosophy” starting with Greeks
  • Who benefits from narrow definitions (Answer: those who already fit them)
  • The political stakes of definitional battles (“terrorism” vs “resistance”)

The provocation: What if “intelligence” was defined by plants? What would count?

Group 3: The Decision Problem

Passage: Seeds have a “decision-making center” that integrates information and decides when to emerge. McClintock called it “the knowledge the cell has of itself.”

What they discussed:

  • Whether “decision” is metaphor or description
  • What we’d have to accept if it’s not metaphor (that decision-making doesn’t require brains)
  • What else might “decide” (rivers, immune systems, algorithms, ecosystems)
  • Whether there’s a politics to who/what we grant decision-making capacity

The provocation: What if decision-making is everywhere, and consciousness is just the version we recognize?

Cross-group connections:

  • The definitions (Group 2) are made by people whose knowledge already counts (Group 1)
  • If we can’t grant decision-making to seeds (Group 3), how do we grant it to non-Western knowledge systems?

Pair Work: Your Field’s Blind Spots

Students paired with someone from a different discipline to discuss:

  1. What’s one thing your field systematically cannot see—not because it’s hidden, but because of how the field is structured?
  2. What would have to change for your field to see it?
  3. Is there a version of the Ibn al-Nafis problem in your field—knowledge that exists elsewhere but isn’t recognized?

What emerged:

  • Fields are structured to miss what doesn’t fit their methods
  • Knowledge in the “wrong” language becomes invisible
  • Academic credentialing determines what counts as knowledge
  • The most interesting insights often come from people without PhDs

Closing: The Book and What’s Next

The assignment:

Read The Light Eaters before January 27. But don’t read it as a book about plants. Read it asking:

  • Where do I see this pattern in my own field?
  • What am I trained not to see?
  • Whose knowledge am I ignoring without realizing it?

The question for next session: What in this book made you most uncomfortable?

Not what you learned. Not what interested you. What disturbed an assumption. What made you question something you thought you knew.


Key Takeaways

  1. Recognition is political: Scientific “truth” depends on who has permission to say what. The 2012 animal consciousness declaration didn’t change reality—it changed permission.
  2. Frameworks filter reality: We don’t see everything that’s there. We see what our frameworks allow us to see. The rest is invisible—not absent, just unnameable.
  3. Definitions are power: Who gets to define “intelligence,” “knowledge,” “consciousness,” “decision-making” determines what counts as real and what gets excluded.
  4. The parenthesis problem is everywhere: Whose knowledge ends up in the main text and whose ends up in footnotes reveals the structure of power in knowledge production.
  5. You’ve already seen it: The thing you can’t name in your research—the pattern you noticed but couldn’t prove, the insight that didn’t fit—might be the most important thing. The question is whether you’ll find language for it or let your framework filter it out.
  6. This is happening now: In 2050, students will read about our moment and wonder why we couldn’t see what was obvious. The question is: what is it? And do we have the courage to name it?

What Counts as Intelligence? Notes from Our First Session

We began today (Oct 28) with a seemingly simple question:

What is intelligence?

The answers came quickly: the ability to think, to reflect, to learn. Processing and retaining information. Pattern recognition. Learning from empirical evidence. Problem-solving. Memory. Agency. Self-awareness. Emotional intelligence. Even artificial intelligence came up—that strange mirror we hold up to ourselves, trying to figure out what makes us “us.”

Someone said “mind.” Someone else said “desire versus self-awareness,” which sparked a question: do you need to be conscious of your desires to be intelligent, or is desire itself a form of intelligence?

What struck me was how much of what we named assumes a human model. A brain. A nervous system. Individual consciousness. Language. The ability to measure and be measured (someone mentioned IQ, and we paused on that—intelligence as something quantifiable, testable, rankable).

But then the conversation shifted when I asked: What if intelligence doesn’t look like this at all?

The Hierarchies We Don’t See

In small groups, I asked students to think about forms of knowledge (or intelligence or experience) that get dismissed because they don’t fit Western frameworks. The examples that emerged were striking:

Oral traditions preserve and transmit knowledge across generations through storytelling and embodied practice—but they get labeled “folklore” instead of “knowledge systems.” Writing, apparently, makes knowledge more legitimate. The medium determines the status.

Experiential knowledge—the kind built over generations through practice and observation—gets dismissed when researchers arrive demanding “data” and “studies.” “We’ve done this for generations and it works” doesn’t count the same way as knowledge produced through specific scientific protocols, even when it’s equally accurate.

I raised the question of household care work as intelligence—the complex, adaptive, multi-tasking labor that gets devalued because it’s associated with women and the domestic sphere. Someone talked about regional intelligence—how certain ways of knowing get marked as provincial, unsophisticated, tied to place rather than universal.

I asked us to think about children, too. About how their ways of knowing and learning get dismissed as “not yet fully developed” rather than recognized as legitimately different.

The pattern became clear: there’s a hierarchy. At the top maybe: Western, written, scientific, rational, individual, male, urban, measurable. Everything else gets pushed to the margins—mystical, traditional, folkloric, emotional, collective, feminine, regional, immeasurable.

Plants at the Bottom of the Hierarchy

If oral traditions get dismissed because they’re not written, plants are even further down the hierarchy—they don’t have language at all. Or do they? They communicate through chemicals, through slow growth patterns, through means we can barely perceive because they operate on timescales that aren’t human.

For decades, botanists who talked about plant “behavior” or plant “intelligence” risked their careers. It wasn’t considered real science. It sounded mystical, unserious, like you were anthropomorphizing—projecting human qualities onto non-human beings.

This is what Zoë Schlanger’s The Light Eaters is going to confront us with: What if the problem isn’t that scientists were anthropomorphizing plants? What if the problem is that our frameworks for understanding intelligence were too narrow to hold what plants—and other forms of life—actually are?

Pea seedlings can hear water flowing underground and grow toward it. Tomato plants being eaten by caterpillars can fill their leaves with chemicals so bitter the caterpillars start eating each other instead. Plants remember. They make decisions. They communicate with each other through vast underground networks.

These aren’t metaphors. This is what plants do.

And scientists are now stuck. They can’t use their normal language—“the plant decides,” “the plant communicates”—because that sounds anthropomorphic. But passive language—“the plant is affected by,” “a chemical process occurs”—doesn’t capture what’s actually happening either.

They need new language. New concepts. New frameworks.

Which means admitting their old frameworks were limited all along.

The Translation Move

It happens all the time: a researcher encounters a concept from another tradition and says, “Oh, what you call X is really just what we call Y in psychology/biology/philosophy.” As if the (often) “Western” or “civilized” or “educated” term is the truth and the other term is just a confused or poetic version of it.

It’s a power move. It assumes one framework is universal and all others are local. It assumes one way of knowing is objective and all others are cultural. It assumes that if we can translate something into “Western” terms, we’ve understood it—when really, we might have just flattened it into something our frameworks can hold.

This is exactly what’s been happening with plants. For centuries, we’ve insisted they must fit into our categories: alive but not conscious, responsive but not intelligent, reactive but not agentic.

And now those categories are breaking.

What Else Have We Been Wrong About?

This is why I think The Light Eaters matters beyond botany. It’s a case study in what happens when your frameworks can’t hold what you’re encountering.

If we’ve been wrong about plant intelligence—not because plants weren’t intelligent, but because our definition of intelligence was too narrow—what else have we been wrong about?

What other forms of intelligence have we dismissed? What other ways of knowing have we marginalized? Whose voices have we failed to hear because we don’t have the language to listen?

The question isn’t just about plants. It’s about the hierarchies we construct and defend, often without realizing it. It’s about who gets to decide what counts as knowledge, intelligence, consciousness, agency.

It’s about power masquerading as objectivity.

A Provocation for the Weeks Ahead

As you read The Light Eaters over the next four weeks, I want you to sit with this discomfort:

What if everything you think you know about intelligence is based on a framework that was designed to exclude certain forms of knowing?

Not to discard your knowledge—but to recognize its limits. To see how your frameworks shape what you can and cannot see.

Pay attention to the moments when the book makes you defensive. When you want to say “but that’s not really intelligence” or “but that’s just a chemical process” or “but they don’t have consciousness like we do.”

Those moments of resistance? That’s where the thinking starts.

And here’s the second provocation, the one that connects to everything we discussed today:

How might recognizing plant intelligence change how you think about other forms of knowledge that get dismissed as “not really” intelligent, scientific, or authoritative?

If we can recognize that plants have been intelligent all along and we just couldn’t see it, what else have we failed to see? What other hierarchies might need to collapse?

Until November

We won’t meet again until November 25th. Four weeks to live with this book. Four weeks to let it unsettle you.

Don’t rush it. Don’t try to “master” it or take notes for a test. Just read it like you’d read something that might genuinely change how you see the world.


Next session: November 25, 10am-2pm
Question: “What in this book made you most uncomfortable? What assumption about intelligence, communication, or agency did it disturb?”

The Keynote Machinery: Why We All Play Along in This Theater

I. What It Feels Like to Attend a Keynote

The body senses it before the mind grasps it. There is that peculiar atmosphere that settles over the room as soon as the keynote is announced. Conversations fall silent differently than they do before ordinary talks – not abruptly, but with a strange reverence, as if a priest were about to ascend the pulpit. People stream in, even those who otherwise vanish into the parallel sessions. Suddenly the great hall is full, while next door brilliant early-career researchers speak to an audience of three. It is as if the entire conference reconfigured itself in this moment, as if the keynote demoted every other event to an appetizer. Bodies reorient themselves, notebooks are opened, pens are drawn – a collective readiness to receive something significant, before a single word has been spoken.

Then the person steps onto the stage. Usually it really is a stage – raised, lit, sometimes even with a lectern reminiscent of campaign appearances. The chair – there is always a chair – reads out a CV that takes longer than some conference papers. Every station is celebrated: Harvard, Oxford, the most important prizes, the most influential books. It is a ritual of legitimation, a conjuring of authority through the sheer accumulation of prestige. The audience listens to this biographical litany with a mixture of awe and envy. You can almost feel the room charging with an expectation that cannot possibly be fulfilled. For what could anyone possibly say who deserves all these accolades? The paradox is perfect: the higher the expectation, the more banal what follows appears.

The talk itself then unfolds a peculiar temporality. For the first ten minutes you still listen attentively, ready to receive the great new idea. Then a strange double perception sets in: you listen and simultaneously stop listening. The words flow past like water, without really touching you. It is not that the content is bad – often it is not. But the form, the mode of delivery, the impossibility of interruption create a distance between speaker and audience that prevents any real communication. You become the passive recipient of a truth that is already settled, already published, already canonized. The room becomes a museum exhibiting what could be read, better, somewhere else.

II. What the Keynote Actually Does (and Why That Is a Problem)

But why do keynotes exist at all? The official answer: they are meant to provide orientation, create an overview, make the broad lines of a discipline visible. That is the pedagogical legitimation. The economic legitimation is subtler: keynotes make conferences marketable. They are the bait, the “draw” that attracts participants and impresses sponsors. A big name on the program justifies higher registration fees, makes funding applications more successful, lends the whole event cultural capital. The keynote is thus not only a scholarly instrument but a marketing instrument. It turns conferences into entertainment, scholars into consumers, and the search for knowledge into spectacle.

But there are also anthropological reasons: keynotes exist because the academic world, for all its Enlightenment rhetoric, is deeply religious in structure. It needs its prophets, its high priests, its holy scriptures. The keynote is the secularized sermon of a scientific religion that refuses to admit it is one. The elevated speaker, the silent audience, the ritual introduction by the chair – all of this reproduces liturgical structures in supposedly rational contexts. We applaud after keynotes not necessarily because they were particularly illuminating, but because we have attended a mass, an act of collective reassurance about the significance of what we do.

The keynote is ultimately a perfect example of what Jacques Derrida criticized as the “metaphysics of presence”: the belief that truth can be transmitted through the immediate presence of an authority. Keynote speakers are the embodiment of this metaphysics – their physical presence is supposed to lend their ideas a weight they would not have on the page. But this presence is illusory. What we experience is not the immediate transmission of insight but the staging of authority. The speakers mostly repeat what they have already written, but the aura of the live performance is supposed to confer a new dignity on the already familiar. It is a conjuring trick: the form suggests immediacy while the content is pure mediation. The keynote deconstructs itself by producing the opposite of what it promises. Instead of living communication, a simulation of communication; instead of dialogue, a disguised monologue; instead of insight, the performance of insight.

Particularly revealing is the temporal structure of the keynote economy. It rests on a strange temporality: what is presented as “current” and “forward-looking” is structurally already past. Keynote speakers are invited for work they did years ago – the time it took them to become keynote-worthy inevitably makes their research historical. The “cutting-edge” rhetoric of the announcements obscures this fundamental anachronism. What is sold as the future of thought is its institutionalized past. The keynote is thus a mechanism of canonization – it transforms living research into dead tradition, turns processes into products and questions into answers. It is the opposite of what science is actually about: the open search for knowledge is replaced by the authoritative proclamation of truths already found.

The spatial arrangement of the keynote likewise reproduces premodern power relations in a postmodern setting. The elevated speakers, the mute audience, the impossibility of interruption – all of it recalls a sermon more than scholarly discourse. This theatricality is not accidental but structurally necessary for the functioning of the academic star system. The keynote needs the staging of exceptionality in order to demonstrate its own justification. Without the ritual elevation it would become visible that, most of the time, the difference between a keynote and an ordinary conference paper is marginal. The form has to generate the difference that is often absent in substance. The keynote is thus less an epistemic than a social phenomenon – it serves not the production of knowledge but the reproduction of academic hierarchies. It is a mechanism of distinction that creates differences where there are none and draws boundaries where permeability would be productive.

III. How Scholarship Could Work Differently

Something like “polyphony” could replace the keynote: four researchers of different generations and disciplines each receive, two weeks in advance, a two-page text from each of the others – not finished essays, but problem sketches, open questions, provisional theses. They prepare no statements; they come with reactions. Ninety minutes of moderated dialogue before the plenary, in which they must engage directly with one another. The audience might even submit live questions via an app, woven in every fifteen minutes. In practice this means: genuine surprises, because no one knows what the others will say. Intellectual friction, because disagreement is welcome. And minutes that record not “results” but open questions for further discussion. The preparation takes less time than a keynote, yet the yield is greater, because new thoughts can emerge in real time.

“Open Source Research Sessions” work even more radically: all participants receive the same material a week in advance – an unfamiliar primary text, contradictory research data, or an unsolved theoretical problem. Names are replaced by numbers; academic titles are irrelevant. Ninety minutes of conversation in a circle, without fixed roles. A timekeeper rotates every twenty minutes, a note-taker every ten. No one may speak for more than three minutes at a stretch. The session ends not with conclusions but with a list of open questions, which is made available online. The minutes belong to all participants jointly and can be taken up by any subsequent conference. In practice this means: the brilliant master’s student can interrupt the emeritus. The intimidated postdoc can venture radical theses because she remains anonymous. And the results cannot be instrumentalized for individual careers.

A third format would be the simple “working session” – often understood as a workshop alternative to the conference: thirty participants, one concrete problem, three hours of intensive group work. Example: “How can the digital humanities become more democratic?” The group splits into five working circles: funding, methods, institutions, training, the public. Each circle has an hour to work out a diagnosis of the problem and possible solutions. Then everyone rotates: whoever was in “funding” moves to “methods” and comments on their proposals. After two rounds of rotation, everyone comes back together and produces a joint document with concrete recommendations for action. This can be published, but above all it is meant to serve as the working basis for follow-up meetings. The practical benefit: instead of talking about problems, people work on solutions. Instead of individual opinions, group perspectives are developed. And instead of academic navel-gazing, political capacity for action emerges.

A fourth possibility would be the “inverse keynote”: three early-career researchers get the prime-time slot to present their riskiest, most unfinished ideas – but without the CV recital, without markers of hierarchy, without the usual gestures of humility. Instead, their thoughts are introduced: “Here is an idea about XY that has not yet been thought through to the end.” Established scholars in the audience can respond, but not as authorities – as interlocutors. For the problem with the conventional “inversion” is that it reproduces the reputation system, only with the signs reversed. Suddenly “early career” becomes the new fetish, the new form of exoticization. Better: ideas speak for themselves, no matter who has them. That does not mean dehumanization but de-authorization. People remain important – as thinkers, questioners, doubters. But their academic titles, their publication lists, their institutions become irrelevant. Paradoxically, that would be more humane than the current system, because it does not reduce people to their reputation but takes them seriously as intellectual beings. Concretely: no introduction of persons, but of problems. No deference to age or status, but curiosity about arguments.

IV. What Would Happen If There Were No More Keynotes

Imagine conferences without keynotes. What would happen? First, perhaps, a shock: the familiar points of orientation would fall away, the hierarchies would become visibly unstable, the structures of expectation would collapse. But after this phase of confusion, something new could emerge: a scholarship that defines itself not through stars but through problems. A conference without keynotes would be forced to justify its relevance differently – not through the prominence of its speakers, but through the urgency of its questions, the innovativeness of its methods, the collectivity of its processes of inquiry. Attention would shift from persons to content, from authority to arguments, from performance to substance. The academic star system would erode, because its most important stage would have been taken away. Suddenly ideas would have to speak for themselves instead of being legitimized by the aura of their originators.

In the long run, this development could bring forth a different scholarly culture. Without the prospect of keynote fame, researchers’ motivations would shift: away from self-presentation, toward problem-solving. Away from competition for attention, toward cooperation in the production of knowledge. The energy that today flows into the cultivation of academic reputation could be invested in improving research methods. Without the centralizing dramaturgy of the keynote, conferences would have to become more experimental, more risk-tolerant, more innovative. No longer able to rely on the pull of big names, they would have to convince through new formats, interesting problems, surprising constellations. That would transform the entire event economy of scholarship: instead of a few large, expensive conferences with star line-ups, there would be many small, inexpensive, but intensive working meetings with real gains in knowledge.

But this vision has its shadow sides too. Without keynotes, conferences could lose their orientation function. For early-career researchers in particular, keynotes are often the first points of contact with the big debates of their disciplines. Without these survey lectures, fragmentation could set in: many small circles occupied only with their specialist topics, losing sight of the larger connections. The keynote also fulfills a pedagogical function – at its best, it shows how to present complex thoughts in a structured way, how to guide an audience, how to make knowledge accessible. These learning effects would have to work differently in a fully egalitarian conference culture. Quality might suffer as well: if everyone is equally important, no one is taken especially seriously anymore. The democratic vision could end in intellectual relativism.

Perhaps the keynote-less future is simply unrealistic, because it ignores basic human needs. People need role models, figures of orientation, stories of exemplary careers. The keynote satisfies not only academic vanity but also deeper anthropological needs for admiration, identification, inspiration. A scholarship without such symbolic figures could seem cold and forbidding, incapable of motivating the next generation or attracting public attention. The critique of the keynote may thus itself be a form of academic hubris – the attempt to overcome all-too-human frailties through rational critique of the system. In the end, the keynote-less future might bring not the hoped-for democratization of scholarship but only its further marginalization in a society that organizes attention differently: through influencers, celebrities, brands. Without stars of its own, scholarship might be even more invisible than it already is.

V. Beyond the Keynote: What Could Change

The critique of the keynote is only the beginning. If we get serious about questioning these seemingly harmless rituals, the consequences could reach across the whole of scholarly culture. For the logic of the keynote – the concentration of attention on individuals – runs through the entire system. It shows in how we talk about “leading authorities,” how we organize research around authors’ names, how we write in reviews: “Müller shows…,” “Schmidt argues…,” as if ideas were private property. What if we spoke instead about problems, methods, findings – without the constant personalization?

That would also change publishing culture. At present, scholarly publishing works on the principle of individual attribution: one author, one paper, one idea. But the most interesting insights usually arise in conversations, through accidents, in collective processes. Why not develop forms of publication that reflect this? Collective authorship could become normal, texts could develop over several iterations, ideas could circulate as works in progress instead of being presented as finished products. Wikipedia shows that this can work – why not in scholarship?

The relationship between teaching and research could change as well. The keynote logic reproduces itself in the seminar: instructors speak, students listen. But what if seminars became genuine research workshops? If students did not merely learn about research but did research? If the boundary between teachers and learners became more permeable? That would improve not only teaching but research too – for new perspectives often arise where hierarchies are loosened.

The question of expertise becomes interesting as well. The keynote suggests that there are people who know “everything” about a topic. But in an age of exponentially growing knowledge, this notion becomes absurd. No one can keep the whole in view anymore, not even a luminary. What we need are not omniscient figures but good networkers – people who can connect different bodies of knowledge. That would fundamentally change the role of the intellectual: away from the lonely genius, toward a node in a network of knowledge.

Finally, the relationship between scholarship and the public could change. Keynote culture produces “spokesmen” for science – mostly male professors over fifty who “represent” the discipline on talk shows. But why should they? Why not let different voices be heard? Why not show the uncertainties and controversies instead of only the apparent certainties? A scholarship without a cult of stars would be less media-friendly, but perhaps more approachable and more honest. And perhaps it would gain more trust for that, not less.

And this is up to us – not to “scholarship” as an abstract system, but to the concrete people who decide every day how to organize conferences, how to teach, how to publish. This banality is actually radical: most change happens not through grand manifestos or theoretical breakthroughs but through the accumulation of unspectacular decisions. An organizer who halves the keynote budget. A professor who runs her seminars as discussion circles rather than lectures. A publisher that replaces anonymous peer review with open commenting. These micro-decisions are more political than any debate over principles, because they change reality instead of merely talking about it. Perhaps that is an uncomfortable insight: we have more power than we like to admit – and therefore more responsibility. It is easier to criticize “the system” than to make one’s own next decision differently.

On Finding Your Place at the Academic Table

I’ve been watching my students struggle with something that shouldn’t be difficult, and it’s starting to irritate me. Not them—they’re doing their best with an impossible task. What irritates me is how we’ve managed to turn one of the most natural human activities—having a conversation—into an exercise in intellectual contortion that would impress a medieval theologian.

Take one student, who spent our last meeting apologizing for their research interests. They’d discovered something genuinely fascinating about how Syrian refugees navigate bureaucratic systems, something that could reshape how we think about agency and institutional power. But instead of telling me about their insights, they spent twenty minutes explaining why their work wasn’t as important as what other scholars had done, why their methods weren’t as sophisticated as so-and-so’s, why their theoretical framework was “just building on” established thinkers. By the time they finished diminishing their contribution, I’d forgotten why I’d been excited about their project in the first place.

This pathological humility isn’t serving anyone. We’ve somehow convinced graduate students that scholarly conversation requires a kind of intellectual genuflection—bow before established authorities, apologize for your existence, then maybe, if you’re very good and very careful, add one tiny brick to the cathedral of knowledge. The metaphor breaks down immediately. Cathedrals aren’t built through conversation; they’re monuments to institutional power. Real conversations happen in kitchens, on park benches, in places where people feel comfortable enough to think out loud.

The problem starts with how we teach literature reviews. We treat them like diplomatic exercises where the goal is to acknowledge everyone important without offending anyone powerful. Students learn to write sentences like “While Smith’s groundbreaking work has contributed significantly to our understanding, and Jones’s influential framework has provided valuable insights, this study seeks to modestly extend the conversation by examining…” By the time you get to the actual point, everyone’s asleep. Including the writer.

What if we tried something different? What if we taught students to write like they actually think? Sara Ahmed does this beautifully—she’ll start a paragraph with something like “I am struck by how…” or “This makes me wonder…” She’s not apologizing for having thoughts; she’s inviting us to think with her. That’s what conversation actually looks like. Someone notices something, gets curious, shares what they’re seeing, and invites others to look more closely.

The “gaps” obsession particularly annoys me—and I’ve written about this here before. We’ve trained students to scan scholarship like archaeologists looking for empty sites to excavate. “No one has studied X” becomes the magical incantation that justifies existence. But the most interesting research rarely emerges from obvious gaps. It comes from noticing that something everyone thinks they understand actually doesn’t make sense. Toni Morrison didn’t write “Beloved” because no one had written about slavery. She wrote it because the stories being told weren’t true to the complexity of the experience.

I’ve started asking students different questions. Instead of “What’s missing from the literature?” I ask “What’s bothering you about how people talk about this topic?” Instead of “What’s your contribution?” I ask “What do you notice that others seem to miss?” The shift is subtle but significant. It moves from deficit thinking to curiosity, from gaps to puzzles, from contribution to perspective.

The voice problem is trickier. Academic writing demands this weird register where you sound authoritative without seeming arrogant, innovative without being disrespectful, passionate without being unprofessional. It’s like being asked to perform enthusiasm at a funeral. Most students solve this by writing in the passive voice and hedging everything: “It might be suggested that there could potentially be some indication that…” This isn’t humility; it’s hiding.

Real intellectual humility looks different. It’s Octavia Butler saying something like “I’m not trying to predict the future; I’m trying to prevent it.” It’s James Baldwin writing something like “Not everything that is faced can be changed, but nothing can be changed until it is faced.” These writers take full responsibility for their ideas while acknowledging the provisional nature of knowledge. They’re not hedging; they’re being honest about what thinking actually involves.

Some of the so-called digital natives—or so we think of them—in our seminar rooms understand this intuitively. They’re used to building ideas collaboratively, testing thoughts in public, refining arguments through response and revision. They know that knowledge emerges through interaction, not individual genius. Yet we ask them to write as if they’re Victorian scholars working alone in libraries, occasionally citing dead authorities for legitimization.

Maybe the conversation metaphor is wrong entirely. Conversations are temporal, ephemeral, responsive. Academic writing feels more like contributing to an archive—adding your voice to an ongoing record that others will encounter later, asynchronously, in contexts you can’t predict. That’s both more intimidating and more liberating than conversation. You can’t control how others will read your work, but you also don’t need to manage their immediate reactions.

The students who eventually find their scholarly voice seem to share something: they stop trying to sound like scholars and start trying to think clearly about things that matter to them. They realize that academic writing, at its best, is just a particular way of paying attention—systematic, rigorous, accountable to evidence, but still fundamentally human. They learn to trust their curiosity while developing tools to pursue it responsibly.

This might be the real challenge: helping students understand that becoming a scholar doesn’t require abandoning their intellect or personality or experiences or interests. It requires bringing these aspects more fully into engagement with ideas that deserve serious attention. The conversation they’re joining isn’t happening in some rarefied atmosphere where only perfect thoughts are welcome. It’s happening wherever people are trying to understand something important about the world. They belong in that conversation not because they’ve earned the right through proper citations, but because they’re curious, thoughtful humans with something to contribute.

The rest is just learning the craft.

Architectural Thinking: Reimagining the Broken Structures of Academic Writing

Last semester, I watched a brilliant student struggle with her dissertation structure for months. Despite having compelling research and sharp insights, she couldn’t find a way to organize her material that felt both intellectually honest and academically legitimate. “I feel like I’m trying to force my ideas into someone else’s house,” she told me, “and none of the rooms are the right shape.” Her frustration stayed with me, crystallizing something I’ve been uncomfortable with for years: the rigid architectural templates we impose on academic writing often undermine rather than serve the knowledge we’re trying to create.

The metaphor of architecture seems particularly apt when thinking about academic writing. We speak of “building” arguments, creating “frameworks,” establishing “foundations,” and designing “structures” for our texts. This language isn’t accidental—it reflects deep assumptions about what scholarly writing should be: solid, stable, carefully engineered, and built to withstand scrutiny. There’s value in this approach, certainly. But I’ve come to believe that our attachment to conventional textual architectures often constrains thinking rather than supporting it. Perhaps it’s time to question the blueprints we’ve inherited.

Gloria Anzaldúa’s groundbreaking work “Borderlands/La Frontera” fundamentally challenged these inherited structures. By blending poetry, memoir, historical analysis, and cultural theory—switching between languages and refusing to separate the personal from the theoretical—she created a form that accurately reflected her content: the experience of existing between cultures, languages, and identities. Reading her work for the first time was revelatory for me. It demonstrated how structure itself could be an argument, how the organization of a text could do intellectual work beyond merely containing ideas. Why, I wondered, was this approach so rare in academic writing?

The conventional architecture of scholarly writing emerged from specific historical contexts and served particular purposes. The familiar IMRaD structure (Introduction, Methods, Results, and Discussion) reflected positivist assumptions about knowledge production that made sense for certain kinds of scientific inquiry. The problem isn’t that these structures exist—it’s that they’ve been universalized and naturalized, applied uncritically across disciplines with wildly different epistemological foundations. We’ve mistaken one architectural style for the only legitimate way to build scholarly knowledge.

I recall submitting an article to a prestigious journal early in my career, experimenting with a more recursive, reflective structure that mirrored the iterative nature of my research process. The rejection letter was polite but clear: the reviewers found the structure “confusing” and suggested I reorganize into standard sections. I complied, of course—tenure was at stake—but something important was lost in translation. The conventional structure flattened the complexity of the research experience, presenting as linear and orderly what had been anything but. I’ve often wondered how much knowledge we lose by forcing complex ideas into standardized forms.

Adrienne Rich wrote about “the dream of a common language,” a phrase that has always resonated with me. Academic writing conventions aspire to something similar—a common architectural language that allows scholars to communicate across differences. There’s undeniable utility in this. When we know what to expect from a text’s organization, we can more easily locate information, evaluate arguments, and engage with ideas. But whose conventions have become standard? Whose ways of organizing knowledge have been marginalized? These questions seem especially important for those of us working in interdisciplinary fields, where no single structural approach fully accommodates the complexity of our research.

Perhaps what we need is a more flexible understanding of scholarly architecture—less like modern skyscrapers with their rigid grids and more like vernacular architectures that respond to local conditions, materials, and needs. I think of Zora Neale Hurston’s methodological innovations in “Mules and Men,” where she created a nested structure of stories within stories that perfectly captured the oral tradition she was documenting. The form wasn’t arbitrary or decorative; it was integral to the knowledge being conveyed. What might academic writing look like if we approached structure with this kind of intentionality and creativity?

To be clear, I’m not suggesting we abandon structure altogether—that would be neither possible nor desirable. Architecture, whether in buildings or texts, is necessary. It creates spaces for ideas to inhabit, pathways for readers to follow, and points of connection between concepts. But I am suggesting that we might hold our structural conventions more lightly, seeing them as possibilities rather than requirements, as tools rather than rules. The question shouldn’t be “Does this follow the standard format?” but rather “Does this structure serve the knowledge I’m trying to create and share?”

This shift requires a different kind of architectural thinking—one that begins with the specific knowledge being developed rather than with predetermined blueprints. It means asking: What structure would best reflect the relationships between these ideas? How can the organization of this text enact the theoretical frameworks I’m employing? What would make this argument not just clear but compelling, not just logical but alive? These questions invite us to see structure as an integral part of the argument rather than a neutral container for it.

I’ve been experimenting with this approach in my own writing, with mixed results. Some readers find it refreshing; others find it disorienting. I sympathize with both reactions. When we encounter unfamiliar textual architectures, we must work harder as readers, developing new navigational strategies rather than relying on established conventions. Is it fair to ask this of our audiences? I’m not always sure. But I’m increasingly convinced that some ideas simply cannot be adequately expressed within conventional structures, that some knowledge requires new forms to be fully realized.

Susan Leigh Star’s work on “boundary objects” offers a useful concept here. Boundary objects are things that maintain a common identity across different contexts while being adapted to local needs—flexible enough to accommodate different perspectives but robust enough to maintain coherence. Perhaps academic structures could function more like boundary objects: recognizable enough to facilitate communication while adaptable enough to accommodate diverse ways of knowing. This seems particularly important in an era of increasing interdisciplinarity, when scholars are working across traditional boundaries and bringing different structural logics into conversation.

I think about my struggling student often, wondering what might have happened if she’d been encouraged to design a structure specifically for her unique research rather than trying to fit her ideas into pre-existing containers. What knowledge might have emerged more clearly? What connections might have become visible? What arguments might have been more persuasive? These aren’t just questions about writing; they’re questions about how we produce, validate, and share knowledge in the academy.

The stakes of these questions extend beyond individual scholarly careers. Academic architecture reflects and reinforces hierarchies of knowledge—determining whose ideas count, which approaches are legitimate, what kinds of evidence matter. Audre Lorde’s famous statement that “the master’s tools will never dismantle the master’s house” seems relevant here. If we’re serious about creating more inclusive, equitable academic spaces, we must reconsider not just what we study but how we structure what we learn. The architecture of our texts matters because it shapes what can be thought, said, and known.

I’m not naive about institutional constraints. Journals have submission guidelines, dissertation committees have expectations, and hiring committees have limited time to evaluate unfamiliar formats. Working within these realities is necessary for many scholars, especially those without the protection of tenure or institutional prestige. But even small architectural experiments—a reflexive section here, a narrative interlude there—can create spaces for different kinds of knowledge to emerge. And those with more security can push harder, creating precedents that make space for others.

Ultimately, what I’m advocating is a more conscious, intentional relationship to the structures of academic writing—one that recognizes their power and approaches them as creative possibilities rather than fixed requirements. In architecture, form follows function; in academic writing, structure should follow substance. Our ideas deserve houses built to their exact specifications, with rooms that accommodate their particular shapes. Isn’t it time we became more thoughtful builders?

The Subtle Art of Crafting Researchable Questions: Beyond Academic Formulas

I’ve been thinking a lot about research questions lately—not just as academic exercises, but as genuine attempts to make sense of the world. Last week, as I was guiding my graduate students through their thesis proposals, I realized something: most of us struggle profoundly with formulating questions that are both intellectually rigorous and practically researchable. We either swing wildly toward the grandiose (“How has globalization transformed human consciousness?”) or retreat into the painfully narrow (“How many times does the word ‘sovereignty’ appear in Johnson’s 2018 speech?”). Finding that middle ground—that sweet spot where curiosity meets feasibility—seems to elude even the most brilliant minds.

This challenge reminds me of Donna Haraway’s concept of “situated knowledges,” which has shaped my thinking since I first encountered it in graduate school. Haraway argues that all knowledge is produced from somewhere—from a specific position that is never neutral or all-seeing. Our research questions emerge from our particular locations in the world, colored by our experiences, assumptions, and blind spots. I think this perspective offers a valuable entry point into thinking about what makes a question not just answerable but worth answering. When we acknowledge the situatedness of our inquiries, we can more honestly assess both their limitations and their potential contributions.

Perhaps what makes question formulation so difficult is that it requires holding contradictory impulses in tension. A good research question must be specific enough to be feasible yet open enough to yield meaningful insights. It must be original yet grounded in existing scholarship. It must reflect your personal interests while speaking to broader academic and social concerns. I remember spending months iterating through versions of my dissertation question, each attempt feeling either too ambitious or too trivial. My advisor finally told me something I now share with my own students: “Your question won’t be perfect. It just needs to be good enough to guide meaningful research.”

That advice shifted something for me. I started to see the research question not as a perfect crystallization of intellectual brilliance, but as a tool—imperfect but useful, provisional but necessary. Rebecca Solnit captures this sensibility beautifully in her essay “Woolf’s Darkness,” where she writes about the value of uncertainty: “To me, the grounds for hope are simply that we don’t know what will happen next, and that the unlikely and the unimaginable transpire quite regularly.” Good research questions, I’ve come to believe, embrace this productive uncertainty. They open spaces for exploration rather than prescribing conclusions.

What does this look like in practice? I think it means moving beyond the formulaic approach often taught in research methods courses. Those approaches have their place—particularly for students learning the basics of research design—but they can sometimes flatten the intellectual vibrancy that makes research worthwhile. Sara Ahmed’s work offers a compelling alternative. In her book “Living a Feminist Life,” she demonstrates how research questions can emerge organically from our encounters with the world, from moments when something doesn’t quite make sense or when accepted explanations feel insufficient. Her questions arise from her lived experience while simultaneously challenging dominant frameworks of understanding.

I’ve noticed in my own work that my most productive questions often begin not with grand theoretical ambitions but with genuine puzzlement. Why did this particular policy intervention succeed when similar ones failed? How do researchers in different disciplines approach the same phenomenon so differently? What explains the persistent gap between stated institutional values and actual practices? These questions don’t necessarily follow the textbook formula of independent and dependent variables, but they’ve led me to insights I couldn’t have anticipated.

The challenge, of course, is transforming these initial curiosities into questions that can actually guide systematic inquiry. This is where craft comes in. I sometimes think of it as sculpting—starting with a rough shape and gradually refining it, removing excess material until the essential form emerges. It requires patience and a willingness to discard earlier versions that no longer serve. It also requires a certain comfort with imperfection, with the knowledge that even our most carefully crafted questions will inevitably miss something important.

I used to think this kind of question formulation was a preliminary stage of research—something to get right before the “real work” began. Now I see it as an ongoing process, one that continues throughout a project as our understanding deepens and shifts. Judith Butler’s approach in “Gender Trouble” exemplifies this kind of iteration. Her central questions about gender and performativity weren’t formulated once and then methodically answered; they evolved as her exploration progressed, opening new lines of inquiry that she couldn’t have anticipated at the outset.

Maybe that’s the most important shift in my thinking: seeing research questions not as fixed destinations but as evolving companions on a journey of understanding. They guide us while being transformed by what we discover along the way. The art lies not in formulating the perfect question from the start, but in developing questions that are alive enough to grow with us.

As I work with students now, I try to encourage this more dynamic, emergent approach to questioning. Rather than pushing them toward premature precision, I ask: What genuinely puzzles you? What assumption would you like to test? Whose perspective is missing from current accounts? What would make your research matter to someone besides yourself and your committee? These questions don’t always yield immediately researchable formulations, but they often lead to more authentic and meaningful inquiry.

I’ve found that the most researchable questions often emerge at the intersection of disciplinary conversations. Kimberlé Crenshaw’s development of intersectionality theory arose from her recognition that neither feminist theory nor critical race theory alone could adequately address the experiences of Black women. Her research questions emerged from this gap, this space between established frameworks. Similarly, some of the most compelling work I’ve seen from students happens when they notice connections or contradictions between different bodies of literature and allow themselves to be puzzled by them.

These reflections don’t offer a neat formula for crafting researchable questions—and maybe that’s the point. Perhaps what we need isn’t another technique but a different orientation: more curiosity and less certainty, more patience with the messy process of refinement, more trust in our capacity to follow questions where they lead. Research, at its best, isn’t about confirming what we already know but about venturing into uncertainty with thoughtful questions as our guides. Doesn’t that seem like a more honest way to approach knowledge-making? I think so, though I’m still figuring it out myself.