A few years back, I was on a panel on poetry reviewing at the annual conference of the Association of Writers and Writing Programs. After all of us panelists presented our ideas, the question and answer session started, and it started with a bang. Right away, a question from the back of the room: What gives you the right to judge? Nothing subtle about that. I recall much less our responses to the question—which, I believe, amounted to declarations that readers did not have to believe us but rather needed to be persuaded by our arguments—than the question itself, and my continued reflection on the question has culminated in a killer retort that, of course, I have only come up with recently: What gives you the right?
See, the inquisitor was not some undergraduate English-philosophy double major—I’m allowed to say this as I was one—but in fact was, and still is, a well-published poet and the editor of an important small poetry press. Of course, we poetry critics were making judgments of value, but so was he—and, indeed, he was making judgments of even greater consequence than many of us on the panel. Here, I will not discuss—except, perhaps, tangentially—what gives anyone the right to judge poetry (everyone does, thank goodness); rather, I wish to explore a key assumption implicit in the questioner’s querulous query—that poetry critics are the ones who judge poetry—and I will argue that all can, and likely should, more fully engage the critical aspects of their relationship to poetry, and I will suggest a useful method by which they can more acutely articulate the decisions and the decision-making processes that constitute and create poetic evaluation.
Evaluation occurs all the time in poetry. Sure, published books of poems are evaluated by critics. But manuscripts are evaluated by publishers. Journal submissions are evaluated by editors. Contest judges evaluate poetry, and so do anthologists. The faculty of poetry graduate programs evaluate applicants’ writing samples. Readers determine over and over again what books and poems to pick up or put down, and when. Workshop participants evaluate each other’s work. Indeed, the solitary, working poet is one of the most active critics, constantly making evaluative assessments regarding a poem in process—is it beautiful? is it true? should a poem be beautiful? should it be true? Even “ever indolent,” negatively capable John Keats noted to his friend Richard Woodhouse that “[m]y judgment…is as active while I am actually writing as my imagin[ation].” Even André Breton corrected the results of his sessions of psychic automatism, having determined that some of the outcomes were not automatic enough. In poetry, everyone’s a critic. Seriously.
But few participants in the world of contemporary poetry seem to understand the vital significance of critical assessment in their relationship(s) to poetry. In Contingencies of Value (Harvard, 1991), Barbara Herrnstein Smith notes the “exile of evaluation” in the literary academy. I’d say that something similar has taken place in contemporary poetry: criticism has been exiled, packed off to be the task—if not solely, then primarily—of critics. It is time to end that exile, but not by assigning some new homeland, some new place to stow evaluation—where would that be?—but rather by acknowledging the ubiquity of critical evaluation and by erasing unnecessarily restrictive borders, including those separating criticism from all the other activities necessary for a thriving literary culture; literary evaluation from sociological / anthropological research; and creative writing from rhetoric and composition, a field which is much more mature when it comes to thinking about assessment.
Here is what I propose. In order for all those engaged in the assessment of poetry to become better, more informed, and more aware critics, the groups making evaluative judgments of poetry—co-editors, panels of judges, fellow faculty, etc.—should record the conversations in which they make evaluative decisions, transcribe the recordings, analyze the transcripts in order to articulate as accurately as possible the group’s decision-making dynamics (focusing especially on what evaluative criteria emerge), and then publish that analysis in a suitable, relevant venue.
This may seem strange. It needn’t. This kind of work is taking place in other fields. For example, in How Professors Think: Inside the Curious World of Academic Judgment (Harvard, 2010), Michèle Lamont employs sociological and anthropological techniques to examine the typically confidential processes of peer review in order to uncover the dynamics of academic decision making and to explore the diverse understandings of terms such as “originality” and “excellence.” In What We Really Value: Beyond Rubrics in Teaching and Assessing Writing (Utah State, 2003), Bob Broad develops a method he calls “dynamic criteria mapping” in order to identify and articulate the actual values that inform the assessments of writing done in undergraduate composition courses.
And this kind of work is beginning to take place in poetry, as well. In “How We Value Poetry: An Empirical Inquiry” (College English 73.2 (Nov. 2010)), Bob Broad and I apply dynamic criteria mapping to a study of the evaluative deliberations of a group of poets, and make some intriguing findings—the main ones being that such investigations in fact could be accomplished and could be revelatory. In “The Poetry of Evaluation: Helping Students Explore How They Value Verse,” an essay in Creative Writing and Education (edited by Graeme Harper, Multilingual Matters, 2014), Bob and I report on the successful use of this method in an undergraduate poetry class. And Bob’s and my collaboration itself has grown out of a consideration of works—among them, Patrick Bizzaro’s Responding to Student Poems: Applications of Critical Theory (NCTE, 1993) and H. L. Hix’s Wild and Whirling Words: A Poetic Conversation (Etruscan, 2003)—which in various ways have suggested that the analysis of evaluative conversation is an important next step in our emerging ideas of how we evaluate poetry.
This gradual shift toward the analysis of evaluative conversations, I believe, also grows out of the simple fact that such conversations are already happening. Something like this process might occur when, say, a panel of—often diverse—judges writes a citation for a contest winner, or when, say, an editorial board needs to write for Poet’s Market a description of the kind of work they publish. What I’m calling for is a more conscientious, concerted, and even empirical effort to investigate and articulate critical work that is being done in regard to poetry virtually everywhere, almost all the time.
It needs doing. The poetry community has been fortunate to have a handful of watchdog groups, such as Foetry and VIDA: Women in Literary Arts, analyzing information, detecting trends, sharing results, and—when stated criteria don’t align with actual choices made, or when problematic decisions are made based on the sometimes subtle / sometimes painfully obvious criteria of personal connection, gender, and/or race—demanding action. The world of contemporary poetry needs more such groups. This is Big Data criticism, and while such criticism is not intended to and should not replace other modes of critical engagement, it is a vital contribution to the critical conversation, calling all to be more thoughtful—and even ethical—in their engagements with poetry.
This work needs doing even at a more straightforwardly “literary” level. Take, for example, the introductory remarks made by Cole Swensen and David St. John in their American Hybrid: A Norton Anthology of New Poetry (Norton, 2009). For all their obvious involvement with assessment and selectivity, Swensen and St. John have an uneasy relation with evaluation. In his introduction, St. John justifies the editorial decision to not “champion individual poets as special exemplars of hybridization” by noting that one of the points of the collection is that “all aspects and variants of hybridization in American poetry are of equal and lasting value.” But such a claim is belied by the anthology itself. After all, not all hybrid poets were included. And not all poems by the anthology’s poets were included. What was the basis for the selections? What were the evaluative criteria?
Interestingly, in lieu of answers to such questions, Cole Swensen seems to recommend that readers of hybrid poetry engage in an effort to discover their own criteria. Swensen states that “hybridity is of course in itself no guarantee of excellence,” and, noting hybridity’s willingness to mix and meld values and criteria from both “experimental” and “conservative” poetries—among them that poems be “well-made, decorous, traditional, formal, and refined, as well as spontaneous, immediate, bardic, irrational, translogical, open-ended, and ambiguous”—she argues that hybridity “make[s] it harder to achieve consensus or even to maintain stable critical criteria.” Instead of offering criteria, hybridity—supposedly—“put[s] more responsibility on individual readers to make their own assessments, which can in turn create stronger readers in that they must become more aware of and refine their own criteria.” But regardless of the seemingly limitless and contradictory values that could have been taken into account, questions persist because choices in fact were made. What were Swensen and St. John’s criteria? They must have had some. What was their decision-making process? They must have had one. How did they pick one poet or poem over another? They certainly did this…
In “Show Your Work,” an essay published on the Poetry Foundation’s web site (available at http://www.poetryfoundation.org/article/186047), poet and editor Matthew Zapruder notes:
I remember that my geometry teacher used to write at the top of my tests, in giant capital letters, SHOW YOUR WORK! This is what I often find myself silently screaming at the pages of yet another diffuse review. I believe that as a reader I am, like almost anyone…much more interested in the kind of thinking that led to the judgments of quality than the judgments themselves. We cannot have great poetry without great poetry criticism, so critics, please, do your job. We are counting on you.
I like this sentiment, but I’d add that critics specifically try the method I’ve outlined above, and after “critics” I’d add “anthologists, editors, teachers, workshop participants, readers, and all of those engaged in the critical assessment of poetry.” In the case of American Hybrid, a treasure trove of information has been lost because Swensen and St. John did not show their work by carefully and conscientiously recording, analyzing, and sharing the results of their many conversations about what poems to include in their anthology. But, more than this, we should by now, in our post-postmodern era, be able to see clearly that to do the work of canon formation—especially in the form of textbook / anthology creation—without articulating what that particular canon specifically means or represents is either a foolish delusion or else purely a power move. Either way, Swensen and St. John indeed leave it to readers-turned-critics to determine their criteria. (It is my opinion that Swensen and St. John did not do a very good job of selecting work, and I say so in “On American Hybrid: A Critical Conversation” (Pleiades 30.2 (2010)), which I co-authored with Jay Thompson.)
It may be objected that such investigation and articulation are overly demanding, that they would take up too much time. While I don’t think this is the case, even if it were, the process would be worth engaging: its benefits are great and numerous. It will make all of those conversations—so often involved, sometimes trying, occasionally excruciating—about poetic value more fruitful by having them produce not only decisions but also articulations of the criteria behind those decisions. Workshops will become more productive. Editorial boards will be able to publish more accurate descriptions of the kind of work they accept, and program faculty will be able to articulate more clearly what they are looking for in writing samples. (And, as a result, working poets will be able to make more informed decisions about where to put their time and resources.) But more broadly and perhaps more powerfully, such work will bring to light the activity of critical assessment that is at the center of so much of what we do but that—perhaps because it is so ubiquitous, and occurs so often so speedily and subconsciously—we fail to recognize. It will make us more aware of the ways and means by which we make our assessments, offering us the opportunity, if deemed necessary, to revise or to demand revision of evaluative processes and criteria.
Everyone’s a critic. Good. Now, let’s all do the work required to be the best critics possible.
MICHAEL THEUNE is professor of English at Illinois Wesleyan University in Bloomington. His criticism and essays have appeared in numerous journals, including Jacket and College English. Theune has served as contributing editor for Pleiades, and currently is the review essay editor for Spoon River Poetry Review. He is editor of Structure & Surprise: Engaging Poetic Turns, and co-editor of "Voltage Poetry" (voltagepoetry.com).