More Thoughts on Poetry

I have had a breakthrough in my thoughts on the nature of poetry. To recap, in the last episode of this blog, I stated that over the past twenty years or so, I had somehow decided that unless I really knew what poetry was, I had no business writing it. Despite having taught more poetry than you can shake a spear at, I didn’t feel I could actually define poetry. It couldn’t be just the use of creative language, because that’s used in the best prose; nor could I say it was in the idea of moving the reader to feel a specific emotion, because that’s the motivation behind all different kinds of prose, too. What was left was simply the form of poetry, which meant that a poem is a poem because the person who created it says it’s a poem and delineates its appearance, using line breaks and stanzas, in such a way to suggest that it is a poem.

That’s fair, of course, but not very satisfying. So I came up with the idea of busting apart the entire idea of genre, and asking if it really matters what we call a piece of writing. Whether it’s prose or poetry, if we feel moved by it, if it elicits a vivid picture or sensation or thought, then it’s good writing. But something in me was left unsatisfied, and so I did what I always do when I have a tricky little intellectual problem: I simply tried to forget about it.

But a few days ago I had an idea about the motivation behind writing poetry. Perhaps, I postulated, that’s what really differentiates a poem from a prose piece: the writer’s motivation. By chance, I was helped along in this line of thinking–about the whole idea of why we write and read poems–by, of all things, a very fine science writer named Ed Yong.

You might remember Yong from his insightful articles on the Covid-19 pandemic, which were published in the Atlantic. I knew Yong to be an excellent writer, so when I saw his book An Immense World: How Animal Senses Reveal the Hidden Realms around Us (2022), I picked it up and read it.

But how does a book on natural science relate to poetry? Bear with me a few minutes and I’ll explain.

Yong’s book is all about the way in which animals’ perceptions are different, sometimes starkly, from our own. It’s also about how human beings have misunderstood and misrepresented the way animals perceive things for millennia because we’re so immured in our own self-contained perceptive world. In other words, by thinking of animals in purely human terms, we limit our view of them. We also limit our view of the world itself. What we perceive, Yong argues throughout the book, determines in large part what we think and how we feel–and, most important of all for my point here, how we process the world we live in.

Yong uses the term “Umwelt” throughout the book to refer to an animal’s perceptual world. The word means “environment” in German, but it has taken on this specialized sense thanks to the scientist Jakob von Uexküll, who first used it this way in 1909. A dog’s “Umwelt,” then, reflects the way it perceives the world, a world in which different colors are highlighted, scents linger in the air long after their source has moved away, and so on.

So how does this all relate to poetry and why we read and write it? Simply this: I propose that a poem’s primary task is to present an Umwelt for its reader. To do this, the poet creates a piece of writing that closely reflects (if she is lucky) the way she sees the world and presents it to the reader as a gift. If the reader accepts the gift, his reward for reading the poem attentively is being able to glimpse the world afresh through an Umwelt that is different from his own. In other words, the reader gets to see the world, or at least a piece of it, through a different perceptual grid, an experience that can be entertaining, sometimes unsettling, often thought-provoking, and, at its best, revelatory.

Is this different from prose? Perhaps not too much, but I’d argue that the very choice to write a poem instead of an essay, short story, or novel indicates something–I’d say something vitally important–about the writer’s Umwelt. The other forms of writing have messages they want to relay. The poem, however, exists simply to allow its reader to step into its author’s Umwelt for a few moments in order to experience the world differently.

So there you have it. For me, at least for now, discovering why we write poems has given me a new understanding and appreciation of poetry. It means I don’t have to decide whether I like or dislike a poem, nor do I have to justify my reaction to it. Poetry simply is; there’s no more point in arguing whether a poem is good or bad than there is in arguing with my dog Flossie whether her way of experiencing the forest we walk through every morning is better than mine, or whether mine is better than hers. If I got the chance to experience the world through her senses, you can bet I’d take it. Curiosity alone would drive me to it.

At the most basic level, then, I write poetry to demonstrate how I experience the world. I read poetry to discover how other people experience the world. In the end, we read and write poetry to bridge the gap between ourselves and others. It’s about sharing our Umwelten, which, in the end, means it’s all about breaking out of our own little self-contained worlds and joining together to form a bigger, better sense of the world we live in.

Spring has finally come to Northern Michigan, where I live. One might think that would make things easier, that creative juices would flow as freely as the sap in the trees and plants that are absorbing the sunshine. But unfortunately that’s not how it works. Spring is a dicey time here, and not just because of the mud left behind by the melting of the snow. (Another thing that’s left behind is shovel-loads of dog feces, which the receding snow banks offer up as yet another sacrifice to the disappearance of winter.) The truth is that when the weather clears and the outside world looks brighter, sometimes it’s disconcerting when your internal world hasn’t kept pace. It can be depressing, because it’s hard to kick yourself into gear to get things done, and in springtime, you have no excuse not to.

So when I saw that a local store was offering a poetry workshop during the month of April in honor of National Poetry Month, I signed up for it on a whim. I don’t know whether I will write any poems as a result of this workshop, but that’s not really the point. What I’d like to happen is for me to rekindle my creative impulses, and so far, though I’m still wrestling with SI (Springtime Inertia), I think I can detect the beginning of some movement towards more of a creative flow.

But the workshop has reminded me of an important question I’ve had over the last few years–one that may be unanswerable but still deserves to be asked:

What makes good writing?

It’s a question I’ve been pondering seriously, even though it might sound like I’m being flippant. Having taught literature throughout my professional career, I should be able to answer that question without too much trouble. For example, as a writing instructor, I’d say, “Good writing is clear, succinct, and precise. It shows consideration for the reader by adhering to the commonly accepted rules of grammar, spelling, and punctuation. It connects the ideas it presents in a way that is easy to read and understand.” I think that’s a good start for a college composition class, anyway.

But clearly this will not work for most creative writing. Poets, for example, often show little consideration for their readers. In fact, I’m not sure contemporary poets actually write with readers in mind; often they seem to be jotting down notes to themselves for later reading. Not that there is anything wrong with that at all–this is, after all, why I am interested in poetry at this point in my life. I’ve realized that there are certain subjects and ideas I want to explore that are better suited for poems than for short essays like this one, and I think it’s worth the time and effort to try to articulate them in poetic form.

However, let’s get back to the question: what does make good creative writing? I am having a hard time formulating an answer. As I get older, I seem to be suffering from the reverse of the Dunning-Kruger Effect. I am less sure of everything I think about, even of questions I once felt sure I could answer. But as far as good writing goes, I have come up with a provisional answer, and although I don’t find it very satisfying, I thought I’d try it out here.

I will begin by saying that the question itself is misguided. That’s because there is no such thing as good writing–only good reading. When we ask the question “what makes good writing?” we’re actually looking through the wrong end of the telescope. A good reader, I submit, is able to read almost anything and be enriched by the experience. A good reader will read any text, be it a poem, essay, novel, or piece of non-fiction, and find connections to other works. Of course, this is not to say there is no such thing as bad writing–I think we all know that it does exist–but that is a different issue. Seeing examples of bad writing may help us understand what not to do, but it won’t really teach creative writers what to do. So once again, I think it’s best to turn the question on its head and focus on what makes a good reader rather than what makes good writing.

After all, it has to be far easier to learn the skills required to be a good reader than to learn to be a good writer. And there are all sorts of advantages for the good reader–not only personal and professional, but social and political, as well. I think I’ll have to ponder on this one for a week or two, however, before I begin to identify how to create good readers and what makes good reading. For now, though, I’ll end with the suggestion that the world would surely be a better place if there were more good readers in it. I’ll go even further and add that maybe we’d all better get to work to see how we can do our part to create good, solid readers, because good readers make good citizens, and we can surely use a great many more good citizens in our world right now.

Smith versus Shelley: A Tale of Two Poems

Yesterday, I co-led a poetry discussion group at one of the area retirement communities, something I’ve done for the last few years. It’s been a really interesting experience–there’s so much to learn and discuss about even mediocre poems, and I enjoy hearing the participants share their ideas about the poems, as well as the stories and memories these poems evoke.

I choose the poems at random, with very little rhyme (pardon the pun) or reason to my choice. One of the poems yesterday was Percy Bysshe Shelley’s “Ozymandias.” Yes, I proffered that old chestnut to the group, even though I’d read it thousands of times and have taught it in many classes. I just wanted another look at it, I guess, and it’s fun to do that with company. What I wasn’t expecting, however, was my co-leader bringing in another poem on the same exact topic, written at the same time.

It happens that Shelley had a friend, the prosaically named Horace Smith, and the two of them engaged in a sonnet writing contest, on the agreed-upon subject of Ancient Egypt and, presumably, Rameses II, also known as Ozymandias. We remember Shelley’s poem: every anthology of 19th-century British literature probably contains it. However, Smith’s sonnet is largely forgotten. In fact, I’ll offer a true confession here: despite having taught Brit lit for decades, I’d not heard of Smith’s version until a couple of days ago.

It turns out that Smith was himself an interesting fellow. He wrote poetry, but was not averse to making money, unlike his younger friend Shelley. Smith was a stock-broker, and made a good living, while also, according to Shelley, being very generous with it. He sounds like a generally good guy, to be honest, something which Shelley aspired to be, but was really not. For all intents and purposes, Shelley was a masterful poet but a real asshole on a personal level, and a bit of an idiot to boot. (What kind of a fool goes sailing in a boat that he didn’t know how to operate, in a storm, when he didn’t even know how to swim?) Smith knew how to make and keep friends as well as money, two things that Shelley was not very good at, by all accounts.

At any rate, I thought it might be interesting to compare the two poems. Of course, we assume Shelley’s poem will be better: it’s the one that is in every anthology of 19th-century British literature, after all, while I–with a Ph.D. in the subject, for whatever that’s worth–didn’t even know of the existence of Smith’s poem until a few days ago. But maybe, just maybe, there’s something valuable in the stockbroker’s poem that has been missed–and wouldn’t that make a fine story in and of itself?

So here are the two poems, first Shelley’s, and then Smith’s.

Ozymandias (Shelley)

I met a traveller from an antique land,
Who said—“Two vast and trunkless legs of stone
Stand in the desert. . . . Near them, on the sand,
Half sunk a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.”

Ozymandias (Smith)

In Egypt’s sandy silence, all alone,
Stands a gigantic Leg, which far off throws
The only shadow that the Desert knows:—
“I am great OZYMANDIAS,” saith the stone,
“The King of Kings; this mighty City shows
The wonders of my hand.”— The City’s gone,—
Naught but the Leg remaining to disclose
The site of this forgotten Babylon.

We wonder — and some Hunter may express
Wonder like ours, when thro’ the wilderness
Where London stood, holding the Wolf in chace,
He meets some fragment huge, and stops to guess
What powerful but unrecorded race
Once dwelt in that annihilated place.

Now, I’d say Shelley definitely has the advantage in terms of poetic language, as well as the narrative situation. His words are sibilant and flowing, and it’s a stroke of genius to have the story come not from the speaker of the poem, but from a traveller from an antique land; it makes the scene seem even more authentic. The alliteration in the last two lines (“boundless” and “bare” as well as “lone” and “level”) is a deft touch as well.

I’d also say that Shelley’s choice of the half-shattered face is much better than Smith’s. There’s something much more poetic about a sneering face, even if it’s only half a face, than a gigantic leg. There’s no way on earth Smith could have made a gigantic leg sound poetic, and that hampers the poetic feel of his sonnet, which is a bit of a shame.

Or is it?

Perhaps Smith wasn’t going for poetic feel here at all. In fact, I’d argue that he definitely wasn’t thinking along the same lines Shelley was. There are obvious similarities between the two poems. We still get the empty site, the desolation of the “forgotten Babylon” that powers so much of Shelley’s version, but it turns out that Smith is interested in something completely different. Where Shelley’s poem comments on the nature of arrogance, a human pride that ends in an ironic fall, Smith’s presents the reader with a different kind of irony. His version is much grander. In fact, it’s a cosmic irony that Smith is grappling with here, as the poem comments on the inevitable rise and fall of human civilization. What I find astounding is that in 1818, just as England was beginning its climb up to the pinnacle of world dominance for the next two centuries, Smith was able to imagine a time when the world he knew would be in tatters, with nothing remaining of the biggest city on earth, save as a hunting ground for the presumably savage descendants of stockbrokers like himself. Smith’s imagination was far more encompassing than Shelley’s, given this kind of projection into the far future.

All told, Shelley’s poem is probably the better one: it’s more quotable, after all, and no matter how much I love Smith’s message and projection into the future, he just doesn’t have the choice of words and rhythm that Shelley does. But need we really limit ourselves to just one of these poems, anyway? I’d say we’ve gleaned about as much as we can from Shelley’s “Ozymandias.” Perhaps ours is an age in which we can appreciate Smith’s vision of a far distant future. Empires rise and fall, waters ebb and flow, and civilizations come and go. Smith, with his Hunter coursing through what was once London, paints this idea just as well as Shelley does with his decayed Wreck. There’s room for both of these poems in our literary canon.

How We Got Here: A Theory

The United States is a mess right now. Beset by a corrupt president and his corporate cronies, plagued by a — um — plague, Americans are experiencing an attack on democracy from within. So just how did we get to this point in history?

I’ve given it a bit of thought, and I’ve come up with a theory. Like many theories, it’s built on a certain amount of critical observation and a large degree of personal experience. Marry those things to each other, and you can often explain even the most puzzling enigmas. Here, then, is my stab at explaining how American society became so divided that agreement on any political topic has become virtually impossible, leaving a vacuum so large and so empty that corruption and the will to power can ensure political victory.

I maintain that this ideological binarism in the United States is caused by two things: prejudice (racism has, in many ways, always determined our political reality), and lack of critical thinking skills (how else could so many people fail to see Trump for what he really is and what he really represents?). Both of these problems result from poor education. For example, prejudice certainly exists in all societies, but the job of a proper education in a free society is to eradicate, or at least to combat, prejudice and flawed beliefs. Similarly, critical thinking skills, while amorphous and hard to define, can be acquired through years of education, whether by conducting experiments in chemistry lab or by explicating Shakespeare’s sonnets. It follows, then, that something must be radically wrong with our educational system for close to half of the population of the United States to be fooled into thinking that Donald Trump can actually be good for this country, much less for the world at large.

In short, there has always been a possibility that a monster like Trump would appear on the political scene. Education should have saved us from having to watch him for the last four years, and the last month in particular, as he tried to dismantle our democracy. Yet it didn’t. So the question we have to ask is this: Where does the failure in education lie?

The trendy answer would be that this failure is a feature, not a bug, in American education, which was always designed to mis-educate the population in order to make it more pliable, more willing to follow demagogues such as Trump. But I’m not satisfied with this answer. It’s too easy, and more important, it doesn’t help us get back on track by addressing the failure (if that’s even possible at this point). So I kept searching for an explanation.

I’ve come up with the following premises. First, the divisions in the country are caused by a lack of shared values–this much is clear. For nearly half the American people, Trump is the apotheosis of greedy egotism, a malignant narcissist who is willing to betray, even to destroy, his country in order to get what he wants, so that he can “win” at the system. For the other half, Trump is a breath of fresh air, a non-politician who was willing to stride into the morass of Washington in order to clean it up and set American business back on its feet. These two factions will never be able to agree–not on the subject of Trump, and very likely, not on any other subject of importance to Americans.

It follows that these two views are irreconcilable precisely because they reflect a dichotomy in values. Values are the intrinsic beliefs that an individual holds about what’s right and wrong; when those beliefs are shared by a large enough group, they become an ethical system. Ethics, the shared sense of right and wrong, seems to be important in a society; as we watch ours disintegrate, we can see that without a sense of ethics, society splinters into factions. Other countries teach ethics as a required subject in high school classes; in the United States, however, only philosophy majors in universities ever take classes on ethics. Most Americans, we might once have said, don’t need such classes, since they experience their ethics every day. If that ever was true, it certainly isn’t so any more.

Yet I would argue that Americans used to have an ethical belief system. We certainly didn’t live up to it, and it was flawed in many ways, but it did exist, and that’s very different from having no ethical system at all. It makes sense to postulate that sometime around the turn of the 21st century, ethics began to disappear from society. I’m not saying that people became unethical, but rather that ethics ceased to matter, and as it faded away, it ceased to exist as a kind of social glue that could hold Americans together.

I think I know how this happened, but be warned: my view is pretty far-fetched. Here goes. Back in the 1970s and 1980s, literary theory poached upon the realm of philosophy, resulting in a collection of theories that insisted a literary text could be read in any number of ways, and that no single reading of a text was the authoritative one. This kind of reading and interpretation amounted to an attack on the authority of the writer and the dominant ideology that produced him or her, as it destabilized the way texts were written, read, and understood. I now see that just as the text became destabilized with this new way of reading, so did everything else. In other words, if an English professor could argue that Shakespeare didn’t belong in the literary canon any longer, that all texts are equally valid and valuable (I’ve argued this myself at times), the result is an attack not only on authority (which was the intention), but also on communality, by which I mean society’s shared sense of what it values, whether it’s Hamlet or Gilligan’s Island. This splintering of values was exacerbated by the advent of cable television and internet music sources; no one was watching or listening to the same things any more, and it became increasingly harder to find any shared ideological place to begin discussions. In other words, the flip side of diversity and multiplicity–noble goals in and of themselves–is a dark one, and now, forty years on, we are witnessing the social danger inherent in dismantling not only the canon, but any system of judgment to assess its contents as well.

Here’s a personal illustration. A couple of years ago, I taught a college Shakespeare class, and on a whim I asked my students to help me define characters from Coriolanus using Dungeons and Dragons character alignment patterns. It was the kind of exercise that would have been a smashing success in my earlier teaching career, the very thing that garnered me three teaching awards within five years. But this time it didn’t work. No one was watching the same television shows, reading the same books, or remembering the same historical events, and so there was no way to come up with good examples that worked for the entire class to illustrate character types. I began to see then that a splintered society might be freeing, but at what cost if we had ceased to be able to communicate effectively?

It’s not a huge leap to get from that Shakespeare class to the fragmentation of a political ideology that leaves, in the wreckage it’s produced, the door wide open to oligarchy, kleptocracy, and fascism. There are doubtless many things to blame, but surely one of them is the kind of socially irresponsible literary theory that we played around with back in the 1980s. I distinctly remember one theorist saying something to the effect that no one has ever been shot for being a deconstructionist, and while that may be true, it is not to say that deconstructionist theory, or any kind of theory that regards its work as mere play, is safe for the society it inhabits. Indeed, we may well be witnessing how very dangerous unprincipled theoretical play can turn out to be, even decades after it has held sway.

Convent-ional Trends in Film and Television

Lately I’ve been spending quite a bit of time with the Aged Parent, and one thing we do together–something we’ve rarely done before–is watch television shows. My mother, deep in the throes of dementia, perks up when she sees Matt Dillon and Festus ride over the Kansas (it is Kansas, isn’t it?) plains to catch bad guys and rescue the disempowered from their clutches. Daytime cable television is filled with Westerns, and I find this fascinating, although I’ve never been a fan of them in the past. Part of my new-found fascination is undoubtedly inspired by Professor Heather Cox Richardson’s theory–presented in her online lectures as well as her Substack newsletter–that the United States’s fascination with the Western genre has a lot to do with the libertarian, every-man-for-himself ideal most Westerns present. I think she’s got a point, but I don’t think that this alone explains our fascination with Westerns. This, however, is an argument I’ll have to return to at a later date, because in this blog post, what I want to talk about is nuns.

Yes–that’s right–Catholic nuns. What was going on in the 1950s and ’60s that made the figure of the young, attractive nun so prevalent in films and television? Here, for example, is a short list of movies from those years that feature nuns:

  1. The Nun’s Story (1959) with Audrey Hepburn
  2. The Nun and the Sergeant (1962), itself a remake of Heaven Knows, Mr. Allison (1957)
  3. Lilies of the Field (1963) with Sidney Poitier
  4. The Sound of Music (1965), no comment needed
  5. The Singing Nun (1966) starring Debbie Reynolds
  6. The Trouble with Angels (1966) with Rosalind Russell and Hayley Mills
  7. Where Angels Go, Trouble Follows (1968), the sequel to #6
  8. Change of Habit (1969), starring the strangely matched Mary Tyler Moore and Elvis Presley (!)

The fascination with nuns even bled over into television, with the series The Flying Nun (1967-1970), starring a post-Gidget Sally Field. This show, with its ridiculous premise of a nun who can fly, seems to have ended the fascination with nuns, or perhaps its bald stupidity simply killed the trend outright. From 1970 until 1992, when Sister Act appeared, there seemed to be a lull in American movies featuring nuns. Incidentally, the films I’ve mentioned here all feature saccharine-sweet characters and simple plots; in a typically American fashion, many of the difficult questions and problems involved in choosing a cloistered life are elided or simply ignored. There are, however, other movies featuring nuns that are not so wholesome; Wikipedia actually has a page devoted to what it terms “Nunsploitation.” These films, mostly foreign, seem more troubling and edgier. I leave an analysis of such films to another blogger, however, because what I really want to investigate is this: why was American culture so enamored, for the space of a decade, with nuns and convent life? I’ve argued previously that popular culture performs the critical task of reflecting and representing dominant ideologies, so my question goes deeper than just asking, “Hey, what’s with all these nuns?” Rather, it seeks to examine what conditions caused this repetitive obsession with nuns in a country that prided itself on the distance between religion and politics and, at least superficially, religion’s exclusion from American ideology.

I have some ideas, but nothing that could be hammered together neatly enough to call a theory to explain this obsession, and so I will be looking to my readers to provide additional explanations. Surely the box-office success of films starring Audrey Hepburn, Debbie Reynolds, Sidney Poitier, and Julie Andrews counts for something: Hollywood has always been a fan of the old “if it worked once, it should work again” creative strategy. But I think this might be too simple an explanation. I’ll have another go: perhaps in an era when women were beginning to explore avenues to power, self-expression, and sexual freedom, the image of a contained and circumscribed nun was a comfort to the conservative forces in American society. It’s just possible that these nuns’ stories were a representation of the desire to keep women locked up, contained, and submissive. On the other hand, the image of the nun could be just the opposite, one in which women’s struggle for independence and self-actualization was most starkly rendered by showing religious women asserting their will despite all the odds against them.

I think it’s quite possible that both these explanations, contradictory as they seem, might be correct. Certainly the depiction of women who submit to being controlled and defined by religion presents a comforting image of a hierarchical past to an audience that fears not only the future but the present as well (we should remember that the world was experiencing profoundly threatening social and political upheaval in the late 1960s). Yet at the same time, the struggle many of these nun-characters undergo in these films might well be representative of non-religious women’s search for meaning, independence, and agency in their own lives.

As I said, I have more questions than answers, and I will end this post with an obvious one: what effect did these films have on the general public? We’ve briefly explored the idea of where such movies came from and what they represent in the American ideology that produced them, but what did they do to their audiences? Was there any increase in teenage girls joining convents in the 1970s, after these films played in theatres and later, on television? What did the religious orders themselves have to say about such films? I’d be interested in learning the answers to these questions, so readers, if you have any ideas, or if you just want to compare notes and share your impressions, please feel free to comment!

How the Study of Literature Could Save Democracy

Beowulf MS, picture from Wikipedia

Usually, I am not one to make grand claims for my discipline. There was a time, back when I was a young graduate student in the 1980s, that I would have; perhaps even more recently, I might have argued that understanding ideology through literary theory and criticism is essential to understanding current events and the conditions we live in. But I no longer believe that.

Perhaps in saying this publicly, I’m risking some sort of banishment from academia. Maybe I will have to undergo a ritual in which I am formally cashiered, like some kind of academic Alfred Dreyfus, although instead of having my sword broken in half and my military braids ripped to shreds, I will have my diploma yanked from my hands and trampled on the ground before my somber eyes. Yet unlike Dreyfus, I will have deserved such treatment, because I am in fact disloyal to my training: I don’t believe literary theory can save the world. I don’t think it’s necessary that we have more papers and books on esoteric subjects, nor do I think it’s realistic or useful for academics to participate in a market system in which the research they produce becomes a commodity in their quest for jobs, promotions, or grant opportunities. In this sense, I suppose I am indeed a traitor.

But recently I have realized, with the help of my friend and former student (thanks, Cari!), that literature classes are still important. In fact, I think studying literature can help save our way of life. You just have to look at it this way: it’s not the abstruse academic research that can save us, but rather the garden-variety study of literature that can prove essential to preserving democracy. Let me explain how.

I’ll begin, as any good scholar should, by pointing out the obvious. We are in a bad place in terms of political discourse–it doesn’t take a scholar to see that. Polarizing views have separated Americans into two discrete camps with very little chance of crossing the aisle to negotiate or compromise. Most people are unwilling to test their beliefs, for example, preferring to cling to them even in the face of contradictory evidence. As social psychologists Elliot Aronson and Carol Tavris point out in a recent article in The Atlantic, "human beings are deeply unwilling to change their minds. And when the facts clash with their preexisting convictions, some people would sooner jeopardize their health and everyone else’s than accept new information or admit to being wrong." They use the term "cognitive dissonance," which refers to the disorientation and discomfort one feels when holding two contradictory beliefs or viewpoints at the same time, to explain why it is so hard for people to change their ideas.

To those of us who study literature, the term “cognitive dissonance” may be new, but the concept certainly is not. F. Scott Fitzgerald writes, in an essay which is largely forgotten except for this sentence, “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function” (“The Crack-Up,” Esquire Magazine, February 1936). In addition, cognitive dissonance isn’t that far removed from an idea expressed by John Keats in a letter he wrote to his brothers back in 1817. He invents the term “Negative Capability” to describe the ability to remain in a liminal state of doubt and uncertainty without being driven to reach any conclusion or definitive belief. Negative capability, in other words, is the capacity to be flexible in our beliefs, to be capable of changing our minds.

I believe that the American public needs to develop negative capability, lots of it, and quickly, if we are to save our democracy.

But there’s a huge problem. Both Fitzgerald and Keats believe that this capacity is reserved for geniuses. In their view, a person is born with this talent for tolerating cognitive dissonance: you either have it–in which case you are incredibly gifted–or you don’t. In contrast, Aronson and Tavris clearly believe it’s possible to develop a tolerance for cognitive dissonance: "Although it’s difficult, changing our minds is not impossible. The challenge is to find a way to live with uncertainty…" While their belief in our ability to tolerate cognitive dissonance and to learn from it is encouraging, it is sobering that they do not provide a clear path toward fostering this tolerance.

So here’s where the study of literature comes in. In a good English class, when we study a text, whether it’s To Kill a Mockingbird or Beowulf, students and teacher meet as more or less equals over the work of literature in an effort to find its meaning and its relevance. Certainly the teacher has more experience and knowledge, but this doesn’t–or shouldn’t–change the dynamic of the class: we are all partners in discovering what the text has to say in general, and to us, specifically. That is our task. In the course of this task, different ideas will be presented. Some interpretations will be rejected; some will be accepted. Some will be rejected, only to be later accepted, even after the space of years (see below for an example).

If we do it well, we will reach a point in the discussion where we consider several different suggestions and possibilities for interpretation. This is the moment during which we become experts in cognitive dissonance, as we relish interpretive uncertainty, examining each shiny new idea and interpretation with the delight of a child holding up gorgeously colored beads to the light. We may put a bead down, but it is only to take up another, different one–and we may well take up the discarded bead only to play with it some more.

The thing that makes the study of literature so important in this process is that it isn’t really all that important in the grand scheme of things. To my knowledge, no one has ever been shot for their interpretation of Hamlet; the preservation of life and limb does not hang on a precise explanation of Paradise Lost. If we use the study of literature as a classroom designed to increase our capacity for cognitive dissonance, in other words, we can dissipate the highly charged atmosphere that makes changing our minds so difficult. And once we get used to the process, when we know what it’s like to experience cognitive dissonance, it will be easier for us to tolerate it in other parts of our lives, even in the sphere of public policy and politics.

If I seem to be writing with conviction (no cognitive dissonance here!), it’s because I have often experienced this negative capability in real time. I will give just two examples. The first one occurred during a class on mystery fiction, when we were discussing the role of gossip in detective novels, which then devolved into a discussion on the ethics of gossip. The class disagreed violently about whether gossip could be seen as good or neutral, or whether it was always bad. A loud (and I mean loud!) discussion ensued, with such force that a janitor felt compelled to pop his head into the classroom–something that I had never seen happen either before or since then–to ask if everything was ok. While other teachers might have felt that they had lost control of the classroom, I, perversely, believe that this might have been my most successful teaching moment ever. That so many students felt safe enough to weigh in, to argue and debate passionately about something that had so little real importance suggested to me that we were exercising and developing new critical aptitudes. Some of us, I believe, changed our minds as a result of that discussion. At the very least, I think many of us saw the topic in a different way than we had to begin with. This, of course, is the result of experiencing cognitive dissonance.

My second example is similar. At the end of one very successful course on Ernest Hemingway, my class and I adjourned for the semester to meet at a local bar, at which we continued our discussion about The Sun Also Rises. My student Cari and I got into a very heated discussion about whether the novel could be seen as a pilgrimage story. Cari said it was; I vehemently disagreed. The argument was fierce and invigorating–so invigorating, as a matter of fact, that at one point a server came to inquire whether there was something wrong, and then a neighboring table began to take sides in the debate. (For the record, I live in Hemingway country, and everyone here has an opinion about him and his works.) Cari and I left the bar firmly ensconced in our own points of view, but a couple of years ago–some three years after the original argument occurred–I came to see it from Cari’s point of view, and I now agree with her that The Sun Also Rises can be seen as a sort of pilgrimage tale. It took a while, but I was able to change my mind.

It is this capacity to change one’s mind, I will argue, that is important, indeed, indispensable, for the democratic process to thrive.

In the end, it may well be that the chief contribution that good teachers of literature make to culture is this: we provide a safe and accessible place for people to learn what cognitive dissonance feels like, and in doing so, we can help them acquire a tolerance for it. This tolerance, in turn, leads to an increase in the ability to participate in civil discourse, which is itself the bedrock of democratic thought and process. In other words, you can invest in STEAM classes all you want, but if you really want to make people good citizens, do not forget about literature courses.

In view of this discovery of mine, I feel it’s my duty to host a noncredit literature class of sorts in the fall, a discussion-based newsletter that covers the great works of English literature–whatever that means–from Beowulf to the early Romantic period. If you’re interested or have suggestions, please let me know by commenting or messaging me, and I’ll do my best to keep you in the loop.

And in the meantime, keep your minds open! Cognitive dissonance, uncomfortable as it is, may just be what will keep democracy alive in the critical days to come.

Choosing Optimism

Photo credit: Daniel Shumway

I haven’t been writing much lately, even though Heaven knows I have the time for it these days. I suppose the main reason is that I haven’t had anything positive to say for a couple of weeks. The political outlook, as well as the growing realization that social distancing will become the new norm for the next three to five years, has taken its toll on my usual optimism.

Having said that, I have to add that I must be the most cautious optimist who ever walked the earth. Several years ago, when my mother was facing a fairly dire medical diagnosis, I told my daughter that until there was definitive proof of it, I would continue to hope for the best. Granted, this was a conscious choice on my part; like everyone else, I can always see the worst possibilities, but on this occasion, I had deliberately decided not to panic. "After all," I added, "I have absolutely nothing to lose by being an optimist." Immediately after the words came out of my mouth, I started to laugh; I could not think of a more pessimistic way of expressing my optimism. It’s almost as if I were some mashup of Ernie and Bert, of Winnie the Pooh and Eeyore, existing in the same body at the same time.

(Incidentally, I turned out to be right: my mother was misdiagnosed and recovered, but not before a young doctor, visiting her in her hospital room on his rounds, said to her, "You’re doing so much better! And you’re looking very good for a woman who is 70 years old." My mother smiled and replied, "Thank you! Actually, I’m 80 years old." He checked her chart and nodded. "Yes, so you are. Well, you’re looking quite good, aren’t you!" It must have cost my mother a bit to have answered him in that way, because she’s self-conscious about her age, but I assume the temptation to put the young doc in his place was simply too great for her to resist.)

This is simply a long-winded way of saying that I often don’t write for this blog unless I’m either outraged or optimistic, and I’ve been neither for the past week or so. But now I think I have something good, something positive, to offer my readers–whoever you may be. It’s not entirely good, but it’s a sunny day today, after several days of wintry weather, and for the moment, at least, I’m able to see some bright spots in our landscape.

It comes in a bad news/good news package. So, here’s the bad news: we’ve tanked our economy, globally, because of Covid-19, trashing productivity, jeopardizing livelihoods, causing mass unemployment. And now, here’s the good news: we’ve tanked our economy, globally, because of Covid-19. How is that good? Think of it this way: whatever happens from here on out, we should never forget that we were willing to sacrifice a great deal, perhaps as much as any generation has ever sacrificed in so short a time, not for a war, but to protect segments of our population that we might ordinarily never even consider: the aged, the infirm, the immunocompromised. This is remarkable–so remarkable, in fact, that we might think this kind of altruism has never happened before in the history of humankind.

But if we did think this, we’d be wrong, because it has. Over and over again.

The anthropologist Margaret Mead once said she considered the earliest sign of civilization to be a healed femur, because it demonstrated that compassion and caring existed within a society, since it takes at least six weeks for the thighbone to heal, and during that time the injured person would be totally dependent on others for his or her survival. And, despite our modern tendency to believe, along with Thomas Hobbes, that life in a natural state must be “nasty, brutish, and short,” we are gaining more and more evidence of the existence of compassion in prehistoric human societies. For example, anthropologists have discovered that Neanderthals cared for injured people, nursing them into old age–and this despite other infirmities that would have precluded their useful contributions to the group.

Like many other people, I’ve been taught that nature was a rough business, and that only the fittest survive. Americans especially have been nurtured on that old chestnut, it seems, even before Darwin’s theories were misappropriated and twisted to create Social Darwinism. We’ve been taught to see the world in this way because it fits our view of ourselves as “rugged individuals” who conquer the environment and make their own destiny. But the era in which this view has held sway is about to end, I hope, and we have Covid-19 to thank for its demise.

One thing we have to understand is that, Hollywood blockbusters and dystopian fiction notwithstanding, disasters don’t always bring out the worst in people; in fact, much of the time, they bring out the very best in humans, as many theorists have pointed out. At least in the early stages of disasters, people tend to act rationally and altruistically. In the last two months, many of us have seen heroic and caring actions performed by people in our neighborhoods and communities. It’s these things we need to focus on, I’d argue, hard as it may be when we are fed a never-ending supply of fear and anxiety.

Don’t get me wrong — I’m both afraid and anxious. I should, to be honest, add a few more adjectives to the mix: terrified, frustrated, angry, sad, antsy, hysterical. But I am learning to fight against the media, and perhaps my own nature, which has learned to feed on bad news and fear. In fact, this blog post is just my way of sharing my most recent discovery about the way we live now: We have been spoon-fed bad news for so long that we are addicted to it. Like the teenager who loves to ride the scariest roller coasters or watch the most terrifying horror flicks, we want to scare ourselves with stories of the disasters that lie ahead of us, of tragedies waiting to jump out at us. Fear, it turns out, is just as thrilling in a news report as it is in a terrifying ride we cannot get off. I will leave it to another blogger, or to my readers (please comment below!), to explain why fear is so compelling and addictive. My point for now is that many of us cannot do without such fear; it has become, in the last ten years especially, part of the fabric of our lives.

But it is dangerous to give in to our addiction to fear in the form of news reports and dire projections about the future, for at least two reasons. First, such reports and predictions may be wrong. Media reporting of human behavior in disasters often is wrong, concentrating on the bad rather than the good. Murder and mayhem sells: “if it bleeds, it leads,” according to an old journalistic saw. Second, these dark views, in addition to their potential inaccuracy, feed our desire for the negative, which I’d argue exists in all of us, even the most optimistic of us. If we think of this desire as an addiction, perhaps we can begin to see the danger of it and wean ourselves off of our negative viewpoints. We may not be more productive (and the very nature of productivity will be questioned and redefined in the coming years, I’d guess), but we may be happier, more satisfied, and ready to work hard to create a better world than the one that lies in shambles around us. After all, we have nothing to lose by being optimists about the future.

Of course, the challenges that face us are enormous, perhaps greater than any other generation has faced. And I don’t always feel optimistic about the likelihood that we can change things substantially. But I know that change is possible, although admittedly it sometimes comes at a great cost. And I know as well that in order to create necessary changes, the work must start well before they actually occur, sometimes centuries before. In other words, we must often imagine the possibility for change long before we can expect to effect it. (This kind of imagining, after all, is exactly what Virginia Woolf does so beautifully at the end of A Room of One’s Own in regard to women’s writing.) Put another way, incremental change is likely just as valuable as actual change, though it is often invisible, swimming just below the surface of current events. Without it, real change could never occur.

So I will just end by referring you to the last scene of Charlie Chaplin’s masterpiece of satire, The Great Dictator (1940, though begun in 1937), in which he makes fun of Adolf Hitler and Nazism. Chaplin reportedly ad-libbed this speech he gave as the Hitler lookalike, which is perhaps why it rings as true now as it did 80 years ago, when the world was facing another catastrophe, one which it survived and continues to learn from to this day. Take a look at it and see if it makes you feel just a little bit better as you face the future that lies ahead.

Inappropriate Songs from the Past (brought to you by SiriusXM Radio)

Benny Goodman and his band in Stage Door Canteen. By film screenshot – Stage Door Canteen film, Public Domain, https://commons.wikimedia.org/w/index.php?curid=1194330

I am a fairly devoted listener of ’40s Junction, a channel on Sirius Radio that plays songs from the 1940s–except when it doesn’t. (To my lasting fury and frustration, I discovered this year that the station ceases its normal operations on November 1 and, for the next six weeks, plays “holiday music.” Now, I don’t know if the program managers decided that the people who listen to 1940s music are the same ones who enjoy endless Christmas carols, but if anyone from Sirius is reading this, here’s a hint: they aren’t.) One thing I’ve found out by listening to ’40s Junction is that if one listens long enough, one can discover some real gems. I mean, we all know “Blues in the Night,” “I’ll Be Seeing You,” “Don’t Fence Me In,” and “String of Pearls,” but occasionally this station plays some songs I’ve never heard of, despite living in a certifiable time warp for my entire life. And so today I’m taking a break from politics and pessimism to discuss four of these little oddities from the past, complete with YouTube links, so that you can listen to them and judge for yourself. Above all, I’m curious about my readers’ reactions to these songs, so please do leave your comments on some or all of them.

I will start out with a song that has a catchy rhythm and some very interesting lyrics: “The Lady from 29 Palms.” It seems to have been recorded in 1946 or so, and there is an interesting reference to the explosive attraction of the lady in question, who is compared to “a load of atom bombs,” which, coming so soon after the bombing of Hiroshima and Nagasaki, is highly insensitive, to say the least. Yet it has a great sax line in the beginning, and with its really clever rhymes, I’d say this is a song that’s worth listening to.

Next on my list is a very odd little song that shocked me when I first heard it. Unlike “The Lady from 29 Palms,” “Who’s Yehoodi” is famous enough to have its own Wikipedia entry, which certainly bears checking out. Hop over there, and you can read about the song’s origins on the Bob Hope Radio Program, when announcer Jerry Colonna got tickled as he introduced one of Hope’s guests, the young violin prodigy Yehudi Menuhin. Colonna made fun of the name, continued his joke through later programs, and eventually, it became a popular meme, although memes hadn’t been invented yet. In 1940, songwriters Bill Seckler and Matt Dennis made a song out of it. The U.S. Navy also got some mileage out of the situation, naming one of its research programs “Project Yehudi.” I’ve linked to the Cab Calloway version, but there are several other versions, including an astoundingly antisemitic one (I know–the whole thing’s kind of antisemitic, but this version really takes the cake). As with “The Lady from 29 Palms,” the song itself is catchy, with the kind of finger-snapping rhythm that makes tunes from this era so appealing. In addition, the song’s many references to secretive, spooky people who are there, but not there, remind me of those Doctor Who beings, the Silence, who watch and influence human events, but are never seen by humans. Added to these odd but intriguing lyrics, there’s some enjoyable big-band music, with the necessary saxophone solos and brass rhythms creating a memorable song. How it disappeared is a mystery–unless, of course, the Silence got involved in the whole thing and wiped it from our collective memories.

The next two songs are about body-shaming. In a way, I’m surprised they found their way onto the air at all in the present day, given the changing climate and hyper-awareness about body image we’ve seen in recent years. The first, “Lean Baby,” was recorded by Frank Sinatra and was very popular. But the Dinah Washington version is even better, so you should check that one out, too. Clever yet brutal lyrics make the song interesting, and once again the music is quite catchy. On the flip side of this “appreciation” of thinness is “Mr. Five-by-Five,” arguably the most successful of the songs I’ve mentioned here. At least seven singers have recorded versions of it, the most recent one in a 2013 movie (Gangster Squad), according to its Wikipedia entry. Here’s a version by Ella Mae Morse recorded in 1942. Again, there are some devilishly funny lyrics that, inappropriate as they are, make you laugh out loud–if you’re by yourself.

So, readers, what do you think about these four songs? Politically incorrect, a fascinating trip down memory lane, historical footnotes, or just oddities? I’m not sure what to make of them, but I am grateful that they are preserved, inappropriate or not, for us to listen to, consider, and critique. So, thanks, Sirius Radio, for the memories–even if I have to put up with two months of Christmas music to get them.

 

Ecohats

Chief Petoskey sporting an Ecohat in Petoskey, Michigan.

 

We are facing an ecological emergency, and too little is being done to address the consequences of climate change. As individuals, our actions may seem inadequate. Yet every action, however small, can lead to something bigger. Change comes only as a result of collective will, and we can demonstrate that will by showing that we desire immediate political, social, and economic action in the face of global climate change.

Ecohats are not a solution, but they are a manifestation of will made public. Based on the pussyhat, a public display of support for women’s rights, Ecohats display support for immediate political action and the systemic change needed to deal with climate change. They are easy to create: simply knit, crochet, or sew a hat in any shade of green to show your support for the people and organizations dedicated to addressing climate change, then wear it or display it with pride.

We can’t all be Greta Thunberg, but we can show our support for those people and organizations that are tirelessly working to address climate change, like Citizens’ Climate Lobby, 350.org, Center for Biological Diversity, and others.

Please consider making, wearing, and displaying an Ecohat to show your support!

 

Ernest Hemingway is also wearing an Ecohat!

Against Cynicism

Scout and the mob

 

Like many other people, I have been very worried about the direction we’re going–not just the United States, but the world in general. Populism–which in its most innocuous form is little more than cheering for the home team but which can be so much more insidious and damaging–is on the rise, and partisanship seems to have infiltrated many western governments, causing us to question the efficacy of democracy itself. Couple this with the imminence of climate change, and the future of human civilization seems dire indeed.

So I understand why many of us might live in a state of worry, of fearful suspense. I know how it feels to wake up each morning, wondering what new terrible thing has happened while I slept, and how it feels to wait impatiently for the slow-moving wheels of democratic rule to right themselves, and to hope for a period in which government works for its citizenry rather than for corporations and billionaires. I, too, have lost hope at times, allowing myself to be convinced that the struggle against injustice and oligarchy is fruitless; like so many other people, I have frequently succumbed to cynicism and inertia, telling myself that any action is doomed to failure.

But that attitude is wrong. I know that now. More importantly, I feel it’s my duty to lead a crusade against this type of cynicism, even if I do so all by myself.

If you’re reading this blog post, excellent. We can work together. On the other hand, if this essay slips unacknowledged into cyberspace, read by no one at all, so be it–that doesn’t matter. The struggle is important, and it must be waged, even if by only one person, and even if I myself am that person. What follows, then, is my own small contribution to the war on cynicism–my manifesto, if you like.

The time for idle anger is over. The time for pessimism is past. We do not have the luxury to sit back and watch knowingly as the world falls apart, nodding as we say, “Of course, we always knew it would be this way. The system is rigged.” Whether it is or is not rigged is beside the point. This is a question that can be debated by future historians, much like the question “How many angels can dance on the head of a pin?” Saints and scholars can debate such a question; they have that luxury. But we no longer have the luxury to debate whether government is or is not effective. We have to act, and we have to act now, if we want to save our way of life.

And actions start with beliefs, primarily the belief that when we act, our actions have effects. I believe–rather, I know, with certainty–that they do have effects. At the local level, our actions, indeed our mere presence at meetings and councils, do have an effect; I have seen this demonstrated in the past year in my role as a city councilmember. Government, at least at the most local level, works–but only if we work hard at it by electing the right people and by holding them accountable. In short, democracy is not compatible with either cynicism or complacency; yet throughout much of the last generation, our citizenry has been guilty of both these things.

Why? Because cynicism is easy. Cynicism is seductive. Cynicism is cool–so cool, in fact, that many voters in 2016 agreed that “draining the swamp” was the best thing going for a candidate who had no other ideas or attractions. So here is my advice: do not give in to the lure of cynicism. It leads nowhere but to the self, to an inflated view of one’s own cleverness and perception, to a self-satisfied egocentrism that congratulates itself on seeing the worst at all times, in all places. 

Close to the climax of To Kill a Mockingbird, Scout Finch prevents a lynching from happening merely by demonstrating her own naive lack of cynicism. As the crowd of angry white men encircles Atticus, who is guarding the innocent Tom Robinson in his cell, Scout does more than anyone else to quell the murderous mob and send it home. Her simple, naive words, her attempt to connect with Mr. Cunningham on a human and neighborly level, represent a belief in innate goodness and the power of community, and they are just enough to disarm a group of angry men bent on taking the life of an innocent African-American man. (Click here to watch this pivotal scene; my apologies for the commercial at the beginning.)

That Tom Robinson ends up dying at the end of the novel for a crime he didn’t commit is part of To Kill a Mockingbird’s tragic impact. As readers, perhaps we can be cynical about that tragic message; but as actors, as characters in our own human drama and, most of all, as real-life community members, we cannot afford such cynicism. We must be like Scout if we want to survive.

Reader, I implore you: give up your cynicism. Today, I ask you to believe in something grander than your own cleverness in discovering the duplicity of others, and to act in good faith, even though no discernible good may come out of your actions in your lifetime. Be naive, if you have to. But say good-bye to cynicism today, this minute. I am certain the generation that comes after us will thank you for it.