A Modest Proposal, and an Example

Proposal:

I would like to use this platform to issue a call to action. I believe that contemporary culture has a desperate need, one which we can all work to address. For too long we have outsourced the intellectual and critical work that must be done to understand the world we live in. We’ve told ourselves that we cannot engage in really intellectual discussions, because such things lie exclusively in the province of the thinkers and researchers who live and work in the universities. We’ve allowed the creation of a huge silo, one which we never enter, in which important thinking and analysis occurs.

In my view, this is a big mistake.

Of course, we need universities and the scholars they produce and the research that they in turn produce. But that does not give us license to stop thinking ourselves, to stop considering the works and ideas that, whether we know it or not, affect our lives and help us make sense of the world. What I’m calling for, in other words and in the simplest terms, is the idea of the amateur intellectual.

It’s not such a crazy idea, when you think about it. I mean, we all know what an amateur detective is; half of mystery fiction wouldn’t exist without the likes of Miss Marple, Lord Peter Wimsey, Father Brown, and Jessica Fletcher. Why not have the same kind of category for thinkers, philosophers, literary critics, historians? Why must we delegate the important intellectual work to others, and not explore works and ideas on our own? One needn’t be a forensic expert to solve a mystery; likewise, one shouldn’t need an advanced degree to make sense of the ideas that swirl around us in this world we live in.

I offer an example below. It’s an idea I’ve had about Frankenstein. In my earlier days (read: before I retired), I’d probably have tried to create a more academic argument out of it, researching what other readers/scholars have said, scaffolding and buttressing my argument to make it airtight and worthy of an academic audience. But I’m done with all that now, and besides, I don’t think it’s right or necessary to let academics have all the fun. I also think it’s an abdication of our own intellectual responsibility, as I said above, to outsource or delegate all the work to a bunch of overworked and myopic academics. We’re all capable of this kind of intellectual play (I refuse to call it “work”), although it may take more practice for some of us to get to the point where we feel comfortable making judgments for ourselves and, going a step further, sharing them with others.

Example:

I will be honest: Frankenstein is not my favorite book. Far from it, and for a number of reasons. Of course, like everyone else in the English-speaking world, perhaps in the entire world, I acknowledge how important the themes of the novel have become, how the ideas of runaway science, intellectual hubris, and regret or guilt about the act of creation itself have become central metaphors of the world we live in. I get that these things are important. I can also see that Frankenstein might lay claim to being the first science fiction novel in English (although there are other claimants), and like everyone else, I am amazed that it was written by a 19-year-old woman, albeit one with illustrious intellects for parents. I love the fact (forgive me) that the self-important and perfidious Percy Shelley has been all but eclipsed by his young wife’s audacity in writing such an important work. I mean, every schoolchild today at least knows about Frankenstein; how many know “To a Skylark” or “Ode to the West Wind” or even “Ozymandias”?

All these things about the novel are important, and they are things I have long appreciated about it. But the novel itself is, to be frank, a mess. It looks back to the amorphous 18th-century English novel much more than it looks forward to the 19th-century novel we all associate with the very form of the novel itself. It’s episodic, for one thing. For another, it’s a hodgepodge in terms of its form. Starting out as a letter, it becomes the first-person narrative of Victor Frankenstein, then it somehow becomes the first-person narrative of the monster itself (never mind how the monster learned not only to speak and understand language without any kind of instruction but actually to write as well), then returns to the doctor’s narrative. It is, to be brief, all over the place in terms of narrative structure.

Yet whenever I referred to the novel–I don’t think I ever had the temerity to actually put it on the syllabus of any of the classes I taught–I always did so with respect. As I said above, I understand the place it has come to occupy in our culture. It has become a central metaphor, much as the story of Darmok and Jalad at Tanagra in that Star Trek: The Next Generation episode (second episode of the fifth season) became for the Tamarians. But I would never argue that the novel itself was a stellar example of artistic creation.

Until yesterday, that is, when I had an epiphany.

I don’t know if Mary Wollstonecraft Shelley knew what she was doing, and it hardly matters if she did, but I can now see that the novel’s very structure, that patchwork, rag-tag narrative that bounces from one story to another without any proper cohesion, is exactly what Dr. Frankenstein’s monster is. Just as the monster is a wild assortment of body parts, taken from various villains and paupers whose cadavers Victor Frankenstein managed to appropriate, so the novel is a wild assortment of stories, sewn together into a loose and rather ugly novel that lacks the grace of any work by Jane Austen or Sir Walter Scott, Shelley’s contemporaries. In other words, Frankenstein’s very structure mirrors the monstrousness of its central character.

And now, suddenly, I no longer see the novel as a work with an important message but a disastrous execution. Rather, I see it as just plain brilliant. Did Shelley see or know what she was doing? I doubt she did; I think it might have been good luck. But it doesn’t really matter. It’s an amazing achievement, whether intentional or not, and is, at the moment, my favorite aspect of the book, and a discovery so thrilling that it makes me wish I were still teaching so I could share it with my students.

And here’s where I return to my proposal: why on earth should we amateurs–those of us not connected to a university or research institution–deny ourselves the pleasure of a discovery like this? If we can read and think, we can do the work/engage in the play. And our lives will be so much richer for doing it.

Amateur intellectuals, unite! It’s time to take up our thinking caps and enter the fray! Or as I would rather put it, let’s enter the playground and begin to have some fun for ourselves. I look forward to seeing your stories of discovery and interpretation.

Austen and Ozu: A Comparison

One of the things that gets me through a long winter (one which hasn’t had much snow, or cold, but still lacks light and is therefore depressing) is a subscription to a streaming service called the Criterion Collection. It offers a variety of films, cycling some of them in and out month by month to add a bit of suspense to its line-up. Mostly, I use it to watch Hollywood films from the 1930s and ’40s, occasionally later, but this month subscribers have been treated to a collection of films directed by Yasujiro Ozu.

I never learned much about films until fairly recently. I mean, I knew I liked the earlier films of Alfred Hitchcock much better than his later ones, and that I’d always used film as a way to experience not only American culture but my own family history. But about fifteen years ago, when I got the approval to teach a college class on Hitchcock, I realized I should know something more than my own idiosyncratic likes and dislikes, and so I set to work educating myself on the history of film.

I did a haphazard job. That much is clear to me now, because while I watched films like L’Avventura and Bicycle Thieves, I missed the films of Yasujiro Ozu. This week, however, I’ve begun to remedy that situation, and I have to say that several days after watching Late Spring (1949), I am still thinking about the film. This morning, it came to me in a flash of inspiration that one of the reasons I love it so much is that the film bears a marked similarity to the work of Jane Austen.

I don’t mean to say that Ozu modeled the film on Austen’s work or her technique; far from it. He may not even have been aware of her existence. But it seems to me that Late Spring, with its very slim plot (a woman who lives with her aging father is urged to marry but doesn’t want to), its focus on manners, its subtle but striking attention to setting–all these things are very similar to any Jane Austen novel–let’s say, for convenience, Pride and Prejudice. There’s a quiet understatement in both works, which somehow demands an emotional and an intellectual investment on the part of the viewer/reader. In addition, Ozu, like Austen, focuses on very simple, quotidian events; nothing major or earth-shattering happens in either work. Rather, life as it is represented is made up of fairly unexceptional moments, for the most part, and only occasionally punctuated by significant and revealing scenes, so subtly drawn that their significance might initially go unnoticed.

Watching the film left me feeling peaceful and satisfied. It evoked the sense of a world that is balanced and somehow justified in its simplicity. This, despite the fact that the world Ozu depicts is clearly putting itself back together after the horrors of World War II. I get the same feeling from reading Austen’s works. True, there are horrible things happening in the world she creates–the Napoleonic Wars, the festering slave trade in the Caribbean, her characters’ aristocratic lives that are built on the suffering of a hideously deprived underclass–and yet, while not specifically trying to hide such things, Austen’s focus, like Ozu’s, remains centered on the small, daily events which together make up the entirety of a human life. For some reason, I take comfort in this attention to the tiny, unexceptional details of the process of living. I suppose, in a way, it makes me appreciate my own, fairly boring (I’m not complaining!) life all the more.

I’d like to say that I’m the only person who has ever thought of comparing Jane Austen to Yasujiro Ozu, because I think it’s a fairly brilliant idea, but unfortunately, I’m not. A quick internet search unearthed this article, which is somewhat disappointing to the scholar in me. The downside is that my idea isn’t as original as I thought it was; the upside, however, is that there is strength in numbers, and someone apparently agrees with me, even if that agreement dates from ten years ago.

And now, I’m headed back to the television set to watch more of Ozu’s films. There are a lot of them. At this rate, I might be able to make it until springtime!

Please comment below if there’s anything you think I should know about Ozu, or Austen, for that matter.

A Confession

Recently this article appeared in The Atlantic, all about why Taylor Swift is worthy of being the subject of a class at Harvard University. I have to say, I’m not all that impressed by the argument author Stephanie Burt makes. It’s not that I don’t think Swift deserves critical attention; of course she does. Certainly more people know more about Taylor Swift than they do about Jonathan Swift (hint: Gulliver’s Travels and A Modest Proposal), and for that reason alone, she is worthy of interest. But Burt seems to think that she’s doing something avant-garde, busting through genres as well as expectations, in offering the class.

I beg to differ.

For decades, many of us teachers have been referring to popular culture in an effort to make past literary works come alive for our students. How many of us compared the Romantic poets to John Denver, with his “back to nature” themes? Or Byron to the Beatles, as the first popular superstar (well, second, since Wordsworth also had to deal with pilgrims landing on his doorstep)? If you’ve spent any time in the classroom at all, chances are you’ve made analogies like these–and whether they are apt or not (please don’t argue with me about mine, because I’m retired now and I really don’t care any more), they succeeded in getting students to look at old works of literature with curiosity and some slight degree of affinity.

What Burt has done, however, is the opposite: she’s using the old stuff to shed light on the new songs that Swift has recently created. As she points out, “I would not be teaching this course, either, if I could not bring in other works of art, from other genres and time periods, that will help my students better understand Swift and her oeuvre.” There is nothing wrong with this approach, either. Interpretation and criticism go both ways, and the very act of exploring one work can shed light on the meaning and structure of another, very different, one. What I object to here is that Burt seems to think she needs to defend a mode of teaching that has existed since the dawn of pedagogy itself, most likely. Perhaps she’s out to blow up the canon; that, too, however, has already been done many times over. I guess my take on the whole subject is that there is nothing new under the sun. Not even Taylor Swift.

But here’s a counterargument to the article, one which may or may not be valid. Does Swift really need interpretation? Isn’t it pretty easy for people to understand and appreciate her work? It’s likely that people need more help understanding and seeing the value in Shakespeare’s works, or Charlotte Brontë’s, or Virginia Woolf’s, than in Taylor Swift’s songs. Those older works, separated from us by time and circumstance and perhaps even language itself, remain alien to us, despite the fact that we know they had enormous influence on the writers who came after them.

Still, I’m all for teaching Taylor Swift’s works (as long as we can avoid the trap of criticism, making sure we do not “murder to dissect,” as Wordsworth put it). What really bothers me, I think, is that I appear to be one of the few people on the planet who don’t really “get” Taylor Swift. I don’t dislike her songs; I just am not forcibly struck by them, or powerfully moved by them, either musically or thematically. And this despite asking my 25-year-old offspring to perform exposure therapy on me using a Spotify playlist. I realize this is probably a defect in me, a matter of faulty taste or lack of experience of some kind. After all, I felt the same way about Hamilton.

Now that the secret is out, I give all my readers permission to stop reading here. I am so clearly out of step with the times, so serious a victim of what I call “century deprivation,” that nothing I say is viable or important. I accept that designation, with what I hope is humility, bafflement, and only a small amount of FOMO, or rather COMO (Certainty Of Missing Out). As proof of the fact that I am completely out of step, I offer the following things I do find endlessly amusing. If I could, I’d offer a class on these clips alone, working to introduce students to media in its infancy and the role of comic relief in the twentieth century.

I’d start with this little snippet from the BBC Archives, in which Lt. Cmdr. Thomas Woodrooffe, reporting on the Royal Review of naval ships in 1937, inadvertently revealed that he had drunk a few too many toasts with his former shipmates earlier in the afternoon, as evidenced by his incoherent but hilarious commentary.

Then I’d move to this wacky clip from the dawn of television broadcasting, an almost-lost skit from the Ernie Kovacs Show. I don’t know why I find this so funny, but perhaps it has to do with the music behind it. I will freely confess that this little jewel was instrumental in getting me through a good deal of the pandemic three years ago.

And finally, here’s the Vitameatavegamin episode from I Love Lucy. I would often use this to teach my public speaking students how to give a persuasive speech using Monroe’s Motivated Sequence, a five-step formula that was often used in advertisements. (The five steps are: grab your audience’s attention, set out a problem, solve the problem, visualize the solution, and urge your audience to action.) Some of my students had never seen a single episode of I Love Lucy, which I found mind-boggling. I’d wager that most of them probably still remember this episode much better than they do Monroe’s Motivated Sequence, which is fine with me.

To all the Swifties out there, I apologize for being tone deaf. I fully understand your pain and disappointment, and I share in it. The clips above are offered as nothing more than a kind of peace offering or compensation to make up for my failure.

As my grandmother used to say, there’s no accounting for taste!

“A Lost Generation”

Hemingway and friends in Spain. Photo from Wikimedia

Gertrude Stein, that enigmatic and difficult writer, is credited with coining this phrase, which Ernest Hemingway famously used as an epigraph for The Sun Also Rises (1926). I can just imagine her saying to her young acolyte, “Ernie, my boy, you and your friends, you’re all just a lost generation,” then taking another sip of absinthe, brought to her on a silver tray by Alice B. Toklas, and changing the subject to talk about Ezra Pound’s latest poems. Somewhere, long ago, I read that Stein also told Hemingway, after having read his first novel, that journalism was not the same thing as literature. I don’t remember whether this hurt the young novelist’s feelings, but it just goes to show that Stein did not pull her punches.

I’ve sometimes wondered what she meant by that term, “a lost generation.” It came to define the entire post-WWI generation, to denote the feeling of hopelessness and desperation that lay just below the surface of the frenetic enjoyment of the Roaring Twenties. Sadly, it was this very generation, lost as it was, that would later see its own children off to yet another global war. Of course, Stein couldn’t have known that such a fate was in store when she first used the phrase. But what exactly did she mean by “lost”?

There are several ways of being lost, some of which I will now briefly explore. A person, or I suppose an entire generation, can lose their way, wandering from place to place. It involves the panic of not knowing where one is going, of growing more and more confused as time passes and one is no closer to one’s destination–perhaps even farther from it, in fact. This is certainly one way of being lost.

But a thing can be lost, too, and maybe that’s what Stein meant: when one goes to look for something and it isn’t in the place it should be, it’s lost. Perhaps Hemingway’s generation was lost in this way; it defied expectations by not doing what it should be doing or being where it should be. In other words, it had simply disappeared from view.

There’s yet another connotation of “lost,” and this one is disturbing to consider, although I’ve often thought it likely that this is what Stein meant when she called Hemingway and his friends “a lost generation.” This sense of the word means, more or less, a lost cause. When something is lost, gone for good, one simply makes do without it. Stein could have thought that Hemingway’s generation was proverbially out to lunch, that they were lost without hope of rescue–AWOL, so to speak–and that nothing good or important could be expected of them. In this sense, they were worse than merely lost; they were lost with no hope of recovery.

That would have been a pretty harsh judgment on the part of Stein, and I have no real evidence to back me up on my theory. Yet it is definitely one of the meanings of “lost.” The idea of an entire generation being disposable–disappeared, in fact–is perhaps one of the cruelest things Stein could ever have said, yet I think she might just have been capable of it.

But it’s when I think of contemporary culture that the cruelty of the term really resonates with me. In fact, I’ve been thinking for a while that the term “lost generation” has become an especially fitting phrase these days, as I watch the struggles of the generation that includes my children and my students as they try to make meaningful lives for themselves. In short, all of the definitions of “lost” listed above could apply to people 40 and under today. They are hopping from university to job, and then from one underpaid job to another, never settling down to any kind of stability for a variety of reasons: the crushing burden of student loan debt, inflation, lack of affordable housing, Covid fallout–the number of problems besetting these young people seems infinite, in fact.

Politically, they are lost, too–this generation, which has so much to be angry about, seems to be AWOL from the political scene. They are the cell phone generation, so they don’t respond to polls (usually conducted on landlines), but much worse is that they are disengaged and seem to have little hope of changing a world that has been so patently unfair to them. They are a political black hole right now; when candidates go to look for them, to canvass for and rely on their votes, many of them are simply not to be found.

But what really horrifies me is the third possible definition of “lost”– as in a “lost cause.” I believe that previous generations, including mine, have essentially cannibalized this generation, selling them on the myth of education as the solution to their problems (it isn’t, unfortunately), or the way to achieve a solid job (not true, either), or the way to solve society’s ills (eye roll here). It grieves me to say this, as a parent as well as a former college professor, but higher education seems to have ensured that this generation will indeed be lost: a lost cause, a piece of collateral damage produced by greedy student loan corporations, ill-conceived government initiatives, ignorant parents, and–to my shame–college employees who sought to fill classes and to bolster enrollment figures in an effort to ensure that their jobs were secure. Because of this willful blindness, an entire generation has been relegated to the status of a lost cause, offered up on the altar of capitalism, jingoistic slogans, and complacent greed.

This is a tragedy. That an entire generation should have to scramble for jobs, housing, and most important of all, a meaningful life, constitutes a struggle that eclipses the anomie of Hemingway’s generation. And, like that first Lost Generation, this generation bears no blame for its condition, though it is often unfairly criticized for a lack of initiative and other sins against the accepted norms of society. As I said above, it’s my generation, and the one before it, that deserves the blame, because we are the ones who helped to enslave them, either willfully or by turning our eyes away from the situation.

I am not sure what, if anything, we can do about this horrible situation. I only know that the solution to all problems begins with acknowledging that the problem exists. Only after we explore the problem in detail, fully admitting our own culpability, can we hope to provide any kind of viable solution.

That seems like a platitude to me, unfortunately. It may be that there is no solution to this problem other than the kind of revolution predicted by Karl Marx 150 years ago: an attempt to throw off the shackles forged by an eminently unfair economic system. After all, an entire generation has been exploited, cruelly offered up as a vicious sacrifice to greed and complacency.

And if revolution is the only answer to the problems besetting this new Lost Generation, so be it. I know who I’ll be rooting for if it does in fact come to that.

More Thoughts on Poetry

I have had a breakthrough in my thoughts on the nature of poetry. To recap, in the last episode of this blog, I stated that over the past twenty years or so, I had somehow decided that unless I really knew what poetry was, I had no business writing it. Despite having taught more poetry than you can shake a spear at, I didn’t feel I could actually define poetry. It couldn’t be just the use of creative language, because that’s used in the best prose; nor could it be the desire to move the reader to feel a specific emotion, because that’s the motivation behind all different kinds of prose, too. What was left was simply the form of poetry, which meant that a poem is a poem because the person who created it says it’s a poem and delineates its appearance, using line breaks and stanzas, in such a way as to suggest that it is a poem.

That’s fair, of course, but not very satisfying. So I came up with the idea of busting apart the entire idea of genre, and asking if it really matters what we call a piece of writing. Whether it’s prose or poetry, if we feel moved by it, if it elicits a vivid picture or sensation or thought, then it’s good writing. But something in me was left unsatisfied, and so I did what I always do when I have a tricky little intellectual problem: I simply tried to forget about it.

But a few days ago I had an idea about the motivation behind writing poetry. Perhaps, I postulated, that’s what really differentiates a poem from a prose piece: the writer’s motivation. By chance, I was helped along in this line of thinking–about the whole idea of why we write and read poems–by, of all people, a very fine science writer named Ed Yong.

You might remember Yong from his insightful articles on the Covid-19 pandemic, which were published in The Atlantic. I knew Yong to be an excellent writer, so when I saw his book An Immense World: How Animal Senses Reveal the Hidden Realms around Us (2022), I picked it up and read it.

But how does a book on natural science relate to poetry? Bear with me a few minutes and I’ll explain.

Yong’s book is all about the way in which animals’ perceptions are different, sometimes starkly, from our own. It’s also about how human beings have misunderstood and misrepresented the way animals perceive things for millennia because we’re so immured in our own self-contained perceptual world. In other words, by thinking of animals in purely human terms, we limit our view of them. We also limit our view of the world itself. What we perceive, Yong argues throughout the book, determines in large part what we think and how we feel–and, most important of all for my point here, how we process the world we live in.

Yong uses the term “Umwelt” throughout the book to refer to an animal’s perceptual world; the word means “environment” in German, but it has taken on a new flavor thanks to the scientist Jakob von Uexküll, who first used it in this specific sense in 1909. A dog’s Umwelt, then, reflects the way it perceives the world, a world in which different colors are highlighted, scents linger in the air long after their source has moved away, and so on.

So how does this all relate to poetry and why we read and write it? Simply this: I propose that a poem’s primary task is to present an Umwelt for its reader. To do this, the poet creates a piece of writing that closely reflects (if she is lucky) the way she sees the world and presents it to the reader as a gift. If the reader accepts the gift, his reward for reading the poem attentively is being able to glimpse the world afresh through an Umwelt that is different from his own. In other words, the reader gets to see the world, or at least a piece of it, through a different perceptual grid, an experience that can be entertaining, sometimes unsettling, often thought-provoking, and, at its best, revelatory.

Is this different from prose? Perhaps not too much, but I’d argue that the very choice to write a poem instead of an essay, short story, or novel indicates something–I’d say something vitally important–about the writer’s Umwelt. The other forms of writing have messages they want to relay. The poem, however, exists simply to allow its reader to step into its author’s Umwelt for a few moments in order to experience the world differently.

So there you have it. For me, at least for now, discovering why we write poems has given me a new understanding and appreciation of poetry. It means I don’t have to decide whether I like or dislike a poem, nor do I have to justify my reaction to it. Poetry simply is; there’s no more point in arguing whether a poem is good or bad than there is in arguing with my dog Flossie whether her way of experiencing the forest we walk through every morning is better than mine, or whether mine is better than hers. If I got the chance to experience the world through her senses, you can bet I’d take it. Curiosity alone would drive me to it.

At the most basic level, then, I write poetry to demonstrate how I experience the world. I read poetry to discover how other people experience the world. In the end, we read and write poetry to bridge the gap between ourselves and others. It’s about sharing our Umwelten, which, in the end, means it’s all about breaking out of our own little self-contained worlds and joining together to form a bigger, better sense of the world we live in.

Spring has finally come to Northern Michigan, where I live. One might think that would make things easier, that creative juices would flow as freely as the sap in the trees and plants that are absorbing the sunshine. But unfortunately that’s not how it works. Spring is a dicey time here, and not just because of the mud left behind by the melting of the snow. (Another thing that’s left behind is shovel-loads of dog feces, which the receding snow banks offer up as yet another sacrifice to the disappearance of winter.) The truth is that when the weather clears and the outside world looks brighter, sometimes it’s disconcerting when your internal world hasn’t kept pace. It can be depressing, because it’s hard to kick yourself into gear to get things done, and in springtime, you have no excuse not to.

So when I saw that a local store was offering a poetry workshop during the month of April in honor of National Poetry Month, I signed up for it on a whim. I don’t know whether I will write any poems as a result of this workshop, but that’s not really the point. What I’d like to happen is for me to rekindle my creative impulses, and so far, though I’m still wrestling with SI (Springtime Inertia), I think I can detect the beginning of some movement towards more of a creative flow.

But the workshop has reminded me of an important question I’ve had over the last few years–one that may be unanswerable but still deserves to be asked:

What makes good writing?

It’s a question I’ve been pondering seriously, even though it might sound like I’m being flippant. Having taught literature throughout my professional career, I should be able to answer that question without too much trouble. For example, as a writing instructor, I’d say, “Good writing is clear, succinct, and precise. It shows consideration for the reader by adhering to the commonly accepted rules of grammar, spelling, and punctuation. It connects the ideas it presents in a way that is easy to read and understand.” I think that’s a good start for a college composition class, anyway.

But clearly this will not work for most creative writing. Poets, for example, often show little consideration for their readers. In fact, I’m not sure contemporary poets actually write with readers in mind; often they seem to be jotting down notes to themselves for later reading. Not that there is anything wrong with that at all–this is, after all, why I am interested in poetry at this point in my life. I’ve realized that there are certain subjects and ideas I want to explore that are better suited for poems than for short essays like this one, and I think it’s worth the time and effort to try to articulate them in poetic form.

However, let’s get back to the question: what does make good creative writing? I am having a hard time formulating an answer. As I get older, I seem to be suffering from the reverse of the Dunning-Kruger Effect. I am less sure of everything I think about, even questions which I once felt sure of the answer to. But as far as good writing goes, I have come up with a provisional answer, and although I don’t find it very satisfying, I thought I’d try it out here.

I will begin by saying that the question itself is misguided. That’s because there is no such thing as good writing–only good reading. When we ask the question “what makes good writing?” we’re actually looking through the wrong end of the telescope. A good reader, I submit, is able to read almost anything and be enriched by the experience. A good reader will read any text, be it a poem, essay, novel, or piece of non-fiction, and find connections to other works. Of course, this is not to say there is no such thing as bad writing–I think we all know that it does exist–but that is a different issue. Seeing examples of bad writing will help us understand what not to do, but it won’t really help creative writers learn what to do to create good writing, so once again, I think it’s best to turn the question on its head and focus on what makes a good reader rather than what makes good writing.

After all, it has to be far easier to learn the skills required to be a good reader than to learn to be a good writer. And there are all sorts of advantages for the good reader–not only personal and professional, but social and political, as well. I think I’ll have to ponder on this one for a week or two, however, before I begin to identify how to create good readers and what makes good reading. For now, though, I’ll end with the suggestion that the world would surely be a better place if there were more good readers in it. I’ll go even further and add that maybe we’d all better get to work to see how we can do our part to create good, solid readers, because good readers make good citizens, and we can surely use a great many more good citizens in our world right now.

Smith versus Shelley: A Tale of Two Poems

Yesterday, I co-led a poetry discussion group at one of the area retirement communities, something I’ve done for the last few years. It’s been a really interesting experience–there’s so much to learn and discuss about even mediocre poems, and I enjoy hearing the participants share their ideas about the poems, as well as the stories and memories these poems evoke.

I choose the poems at random, with very little rhyme (pardon the pun) or reason to my choices. One of the poems yesterday was Percy Bysshe Shelley’s “Ozymandias.” Yes, I proffered that old chestnut to the group, even though I’ve read it thousands of times and taught it in many classes. I just wanted another look at it, I guess, and it’s fun to do that with company. What I wasn’t expecting, however, was my co-leader bringing in another poem on the same exact topic, written at the same time.

It happens that Shelley had a friend, the prosaically named Horace Smith, and the two of them engaged in a sonnet writing contest, on the agreed-upon subject of Ancient Egypt and, presumably, Rameses II, also known as Ozymandias. We remember Shelley’s poem: every anthology of 19th-century British literature probably contains it. However, Smith’s sonnet is largely forgotten. In fact, I’ll offer a true confession here: despite having taught Brit lit for decades, I’d not heard of Smith’s version until a couple of days ago.

It turns out that Smith was himself an interesting fellow. He wrote poetry, but was not averse to making money, unlike his younger friend Shelley. Smith was a stockbroker, and made a good living, while also, according to Shelley, being very generous with it. He sounds like a generally good guy, to be honest, something which Shelley aspired to be, but was really not. For all intents and purposes, Shelley was a masterful poet but a real asshole on a personal level, and a bit of an idiot to boot. (What kind of a fool goes sailing in a boat that he didn’t know how to operate, in a storm, when he didn’t even know how to swim?) Smith knew how to make and keep friends as well as money, two things that Shelley was not very good at, by all accounts.

At any rate, I thought it might be interesting to compare the two poems. Of course, we assume Shelley’s poem will be better: it’s the one that is in every anthology of 19th-century British literature, after all, while I–with a Ph.D. in the subject, for whatever that’s worth–didn’t even know of the existence of Smith’s poem until a few days ago. But maybe, just maybe, there’s something valuable in the stockbroker’s poem that has been missed–and wouldn’t that make a fine story in and of itself?

So here are the two poems, first Shelley’s, and then Smith’s.

Ozymandias (Shelley)

I met a traveller from an antique land,
Who said—“Two vast and trunkless legs of stone
Stand in the desert. . . . Near them, on the sand,
Half sunk a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.”

Ozymandias (Smith)

In Egypt’s sandy silence, all alone,
Stands a gigantic Leg, which far off throws
The only shadow that the Desert knows:—
“I am great OZYMANDIAS,” saith the stone,
“The King of Kings; this mighty City shows
The wonders of my hand.”— The City’s gone,—
Naught but the Leg remaining to disclose
The site of this forgotten Babylon.

We wonder — and some Hunter may express
Wonder like ours, when thro’ the wilderness
Where London stood, holding the Wolf in chace,
He meets some fragment huge, and stops to guess
What powerful but unrecorded race
Once dwelt in that annihilated place.

Now, I’d say Shelley definitely has the advantage in terms of poetic language, as well as the narrative situation. His words are sibilant and flowing, and it’s a stroke of genius to make the story come not from the speaker of the poem, but from a traveler from an antique land; it makes the scene seem even more authentic. The alliteration in the last two lines (“boundless” and “bare” as well as “lone” and “level”) is a deft touch as well.

I’d also say that Shelley’s choice of the half-shattered face is much better than Smith’s. There’s something much more poetic about a sneering face, even if it’s only half a face, than about a gigantic leg. There’s no way on earth Smith could have made a gigantic leg sound poetic, and that hampers the poetic feel of his sonnet, which is a bit of a shame.

Or is it?

Perhaps Smith wasn’t going for poetic feel here at all. In fact, I’d argue that he definitely wasn’t thinking along the same lines Shelley was. There are obvious similarities between the two poems. We still get the empty site, the desolation of the “forgotten Babylon” that powers so much of Shelley’s version, but it turns out that Smith is interested in something completely different. Where Shelley’s poem comments on the nature of arrogance, a human pride that ends in an ironic fall, Smith’s presents the reader with a different kind of irony. His version is much grander. In fact, it’s a cosmic irony that Smith is grappling with here, as the poem comments on the inevitable rise and fall of human civilization. What I find astounding is that in 1818, just as England was beginning its climb up to the pinnacle of world dominance for the next two centuries, Smith was able to imagine a time when the world he knew would be in tatters, with nothing remaining of the biggest city on earth, save as a hunting ground for the presumably savage descendants of stockbrokers like himself. Smith’s imagination was far more encompassing than Shelley’s, given this kind of projection into the far future.

All told, Shelley’s poem is probably the better one: it’s more quotable, after all, and no matter how much I love Smith’s message and projection into the future, he just doesn’t have the choice of words and rhythm that Shelley does. But need we really limit ourselves to just one of these poems, anyway? I’d say we’ve gleaned about as much as we can from Shelley’s “Ozymandias.” Perhaps ours is an age in which we can appreciate Smith’s vision of a far distant future. Empires rise and fall, waters ebb and flow, and civilizations come and go. Smith, with his Hunter coursing through what was once London, paints this idea just as well as Shelley does with his decayed Wreck. There’s room for both of these poems in our literary canon.

How We Got Here: A Theory

The United States is a mess right now. Beset by a corrupt president and his corporate cronies, plagued by a — um — plague, Americans are experiencing an attack on democracy from within. So just how did we get to this point in history?

I’ve given it a bit of thought, and I’ve come up with a theory. Like many theories, it’s built on a certain amount of critical observation and a large degree of personal experience. Marry those things to each other, and you can often explain even the most puzzling enigmas. Here, then, is my stab at explaining how American society became so divisive that agreement on any political topic has become virtually impossible, leaving a vacuum so large and so empty that corruption and the will to power can ensure political victory.

I maintain that this ideological binarism in the United States is caused by two things: prejudice (racism has, in many ways, always determined our political reality), and lack of critical thinking skills (how else could so many people fail to see Trump for what he really is and what he really represents?). Both of these problems result from poor education. For example, prejudice certainly exists in all societies, but the job of a proper education in a free society is to eradicate, or at least to combat, prejudice and flawed beliefs. Similarly, critical thinking skills, while amorphous and hard to define, can be acquired through years of education, whether by conducting experiments in chemistry lab or by explicating Shakespeare’s sonnets. It follows, then, that something must be radically wrong with our educational system for close to half of the population of the United States to be fooled into thinking that Donald Trump can actually be good for this country, much less for the world at large.

In short, there has always been a possibility that a monster like Trump would appear on the political scene. Education should have saved us from having to watch him for the last four years, and the last month in particular, as he tried to dismantle our democracy. Yet it didn’t. So the question we have to ask is this: Where does the failure in education lie?

The trendy answer would be that this failure is a feature, not a bug, in American education, which was always designed to mis-educate the population in order to make it more pliable, more willing to follow demagogues such as Trump. But I’m not satisfied with this answer. It’s too easy, and more important, it doesn’t help us get back on track by addressing the failure (if that’s even possible at this point). So I kept searching for an explanation.

I’ve come up with the following premises. First, the divisions in the country are caused by a lack of shared values–this much is clear. For nearly half the American people, Trump is the apotheosis of greedy egotism, a malignant narcissist who is willing to betray, even to destroy, his country in order to get what he wants, so that he can “win” at the system. For the other half, Trump is a breath of fresh air, a non-politician who was willing to stride into the morass of Washington in order to clean it up and set American business back on its feet. These two factions will never be able to agree–not on the subject of Trump, and very likely, not on any other subject of importance to Americans.

It follows that these two views are irreconcilable precisely because they reflect a dichotomy in values. Values are the intrinsic beliefs that an individual holds about what’s right and wrong; when those beliefs are shared by a large enough group, they become an ethical system. Ethics, the shared sense of right and wrong, seems to be important in a society; as we watch ours disintegrate, we can see that without a sense of ethics, society splinters into factions. Other countries teach ethics as a required subject in high school classes; in the United States, however, only philosophy majors in universities ever take classes on ethics. Most Americans, we might once have said, don’t need such classes, since they experience their ethics every day. If that ever was true, it certainly isn’t so any more.

Yet I would argue that Americans used to have an ethical belief system. We certainly didn’t live up to it, and it was flawed in many ways, but it did exist, and that’s very different from having no ethical system at all. It makes sense to postulate that some time back around the turn of the 21st century, ethics began to disappear from society. I’m not saying that people became unethical, but rather that ethics ceased to matter, and as it faded away, it ceased to exist as a kind of social glue that could hold Americans together.

I think I know how this happened, but be warned: my view is pretty far-fetched. Here goes. Back in the 1970s and 1980s, literary theory poached upon the realm of philosophy, resulting in a collection of theories that insisted a literary text could be read in any number of ways, and that no single reading of a text was the authoritative one. This kind of reading and interpretation amounted to an attack on the authority of the writer and the dominant ideology that produced him or her, as it destabilized the way texts were written, read, and understood. I now see that just as the text became destabilized with this new way of reading, so did everything else. In other words, if an English professor could argue that Shakespeare didn’t belong in the literary canon any longer, that all texts are equally valid and valuable (I’ve argued this myself at times), the result is an attack not only on authority (which was the intention), but also on communality, by which I mean society’s shared sense of what it values, whether it’s Hamlet or Gilligan’s Island. This splintering of values was exacerbated by the advent of cable television and internet music sources; no one was watching or listening to the same things any more, and it became increasingly hard to find any shared ideological place to begin discussions. In other words, the flip side of diversity and multiplicity–noble goals in and of themselves–is a dark one, and now, forty years on, we are witnessing the social danger inherent in dismantling not only the canon, but any system of judgment to assess its contents as well.

Here’s a personal illustration. A couple of years ago, I taught a college Shakespeare class, and on a whim I asked my students to help me define characters from Coriolanus using Dungeons and Dragons character alignment patterns. It was the kind of exercise that would have been a smashing success in my earlier teaching career, the very thing that garnered me three teaching awards within five years. But this time it didn’t work. No one was watching the same television shows, reading the same books, or remembering the same historical events, and so there was no way to come up with good examples that worked for the entire class to illustrate character types. I began to see then that a splintered society might be freeing, but at what cost if we had ceased to be able to communicate effectively?

It’s not a huge leap to get from that Shakespeare class to the fragmentation of a political ideology that leaves, in the wreckage it’s produced, the door wide open to oligarchy, kleptocracy, and fascism. There are doubtless many things to blame, but surely one of them is the kind of socially irresponsible literary theory that we played around with back in the 1980s. I distinctly remember one theorist saying something to the effect that no one has ever been shot for being a deconstructionist, and while that may be true, it is not to say that deconstructionist theory, or any kind of theory that regards its work as mere play, is safe for the society it inhabits. Indeed, we may well be witnessing how very dangerous unprincipled theoretical play can turn out to be, even decades after it has held sway.

Convent-ional Trends in Film and Television

Lately I’ve been spending quite a bit of time with the Aged Parent, and one thing we do together–something we’ve rarely done before–is watch television shows. My mother, deep in the throes of dementia, perks up when she sees Matt Dillon and Festus ride over the Kansas (it is Kansas, isn’t it?) plains to catch bad guys and rescue the disempowered from their clutches. Daytime cable television is filled with Westerns, and I find this fascinating, although I’ve never been a fan of them in the past. Part of my new-found fascination is undoubtedly inspired by Professor Heather Cox Richardson’s theory–presented in her online lectures as well as her Substack newsletter–that the United States’s fascination with the Western genre has a lot to do with the libertarian, every-man-for-himself ideal most Westerns present. I think she’s got a point, but I don’t think that this alone explains our fascination with Westerns. This, however, is an argument I’ll have to return to at a later date, because in this blog post, what I want to talk about is nuns.

Yes–that’s right–Catholic nuns. What was going on in the 1950s and ’60s that made the figure of the young, attractive nun so prevalent in films and television? Here, for example, is a short list of movies featuring nuns from the late 1950s and 1960s:

  1. The Nun’s Story (1959) with Audrey Hepburn
  2. The Nun and the Sergeant (1962), itself a remake of Heaven Knows, Mr. Allison (1957)
  3. Lilies of the Field (1963) with Sidney Poitier
  4. The Sound of Music (1965), no comment needed
  5. The Singing Nun (1966) starring Debbie Reynolds
  6. The Trouble with Angels (1966) with Rosalind Russell and Hayley Mills
  7. Where Angels Go, Trouble Follows (1968), the sequel to #6
  8. Change of Habit (1969), starring the strangely matched Mary Tyler Moore and Elvis Presley (!)

The fascination with nuns even bled over into television, with the series The Flying Nun (1967-1970), starring a post-Gidget Sally Field. This show, with its ridiculous premise of a nun who can fly, seems to have ended the fascination with nuns, or perhaps its bald stupidity simply killed it outright. From 1970 until 1992, when Sister Act appeared, there seemed to be a lull in American movies featuring nuns. Incidentally, the films I’ve mentioned here all feature saccharine-sweet characters and simple plots; in a typically American fashion, many of the difficult questions and problems involved in choosing a cloistered life are elided or simply ignored. There are, however, other movies featuring nuns that are not so wholesome; Wikipedia actually has a page devoted to what it terms “Nunsploitation.” These films, mostly foreign, seem more troubling and edgier. I leave an analysis of such films to another blogger, however, because what I really want to investigate is this: why was American culture so enamored, for the space of a decade, with nuns and convent life? I’ve argued previously that popular culture performs the critical task of reflecting and representing dominant ideologies, so my question goes deeper than just asking, “Hey, what’s with all these nuns?” Rather, it seeks to examine what conditions caused this repetitive obsession with nuns in a country that prided itself on the distance between religion and politics and, at least superficially, religion’s exclusion from American ideology.

I have some ideas, but nothing that could be hammered together neatly enough to call a theory to explain this obsession, and so I will be looking to my readers to provide additional explanations. Surely the box-office success of films starring Audrey Hepburn, Debbie Reynolds, Sidney Poitier, and Julie Andrews counts for something: Hollywood has always been a fan of the old “if it worked once, it should work again” creative strategy. But I think this might be too simple an explanation. I’ll have another go: perhaps in an era when women were beginning to explore avenues to power, self-expression, and sexual freedom, the image of a contained and circumscribed nun was a comfort to the conservative forces in American society. It’s just possible that these nuns’ stories were a representation of the desire to keep women locked up, contained, and submissive. On the other hand, the image of the nun could be just the opposite, one in which women’s struggle for independence and self-actualization was most starkly rendered by showing religious women asserting their will despite all the odds against them.

I think it’s quite possible that both these explanations, contradictory as they seem, might be correct. Certainly the depiction of women who submit to being controlled and defined by religion presents a comforting image of a hierarchical past to an audience that fears not only the future but the present as well (we should remember that the world was experiencing profoundly threatening social and political upheaval in the late 1960s). Yet at the same time, the struggle many of these nun-characters undergo in these films might well be representative of non-religious women’s search for meaning, independence, and agency in their own lives.

As I said, I have more questions than answers, and I will end this post with an obvious one: what effect did these films have on the general public? We’ve briefly explored the idea of where such movies came from and what they represent in the American ideology that produced them, but what did they do to their audiences? Was there any increase in teenage girls joining convents in the 1970s, after these films played in theatres and later, on television? What did the religious orders themselves have to say about such films? I’d be interested in learning the answers to these questions, so readers, if you have any ideas, or if you just want to compare notes and share your impressions, please feel free to comment!

How the Study of Literature Could Save Democracy

Beowulf MS, picture from Wikipedia

Usually, I am not one to make grand claims for my discipline. There was a time, back when I was a young graduate student in the 1980s, that I would have; perhaps even more recently, I might have argued that understanding ideology through literary theory and criticism is essential to understanding current events and the conditions we live in. But I no longer believe that.

Perhaps in saying this publicly, I’m risking some sort of banishment from academia. Maybe I will have to undergo a ritual in which I am formally cashiered, like some kind of academic Alfred Dreyfus, although instead of having my sword broken in half and my military braids ripped to shreds, I will have my diploma yanked from my hands and trampled on the ground before my somber eyes. Yet unlike Dreyfus, I will have deserved such treatment, because I am in fact disloyal to my training: I don’t believe literary theory can save the world. I don’t think it’s necessary that we have more papers and books on esoteric subjects, nor do I think it’s realistic or useful for academics to participate in a market system in which the research they produce becomes a commodity in their quest for jobs, promotions, or grant opportunities. In this sense, I suppose I am indeed a traitor.

But recently I have realized, with the help of my friend and former student (thanks, Cari!), that literature classes are still important. In fact, I think studying literature can help save our way of life. You just have to look at it this way: it’s not the abstruse academic research that can save us, but rather the garden-variety study of literature that can prove essential to preserving democracy. Let me explain how.

I’ll begin, as any good scholar should, by pointing out the obvious. We are in a bad place in terms of political discourse–it doesn’t take a scholar to see that. Polarizing views have separated Americans into two discrete camps with very little chance of crossing the aisle to negotiate or compromise. Most people are unwilling to test their beliefs, for example, preferring to cling to them even in the face of contradictory evidence. As social psychologists Elliot Aronson and Carol Tavris point out in a recent article in The Atlantic, “human beings are deeply unwilling to change their minds. And when the facts clash with their preexisting convictions, some people would sooner jeopardize their health and everyone else’s than accept new information or admit to being wrong.” They use the term “cognitive dissonance,” which means the sense of disorientation and even discomfort one feels when considering two opposing viewpoints, to explain why it is so hard for people to change their ideas.

To those of us who study literature, the term “cognitive dissonance” may be new, but the concept certainly is not. F. Scott Fitzgerald writes, in an essay now largely forgotten except for this sentence, “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function” (“The Crack-Up,” Esquire, February 1936). Cognitive dissonance isn’t that far removed, either, from an idea John Keats expressed in a letter to his brothers back in 1817. He coins the term “Negative Capability” to describe the ability to remain in a liminal state of doubt and uncertainty without being driven toward any conclusion or definitive belief. Negative capability, in other words, is the capacity to hold our beliefs flexibly–to be capable of changing our minds.

I believe that the American public needs to develop negative capability, lots of it, and quickly, if we are to save our democracy.

But there’s a huge problem. Both Fitzgerald and Keats believe that this capacity is reserved for geniuses. In their view, a person is born with the talent for tolerating cognitive dissonance: you either have it–in which case you are incredibly gifted–or you don’t. In contrast, Aronson and Tavris clearly believe it’s possible to develop a tolerance for cognitive dissonance: “Although it’s difficult, changing our minds is not impossible. The challenge is to find a way to live with uncertainty…” While their belief in our ability to tolerate cognitive dissonance and to learn from it is encouraging, it is sobering that they do not offer a clear path toward fostering this tolerance.

So here’s where the study of literature comes in. In a good English class, when we study a text, whether it’s To Kill a Mockingbird or Beowulf, students and teacher meet as more or less equals over the work of literature in an effort to find its meaning and its relevance. Certainly the teacher has more experience and knowledge, but this doesn’t–or shouldn’t–change the dynamic of the class: we are all partners in discovering what the text has to say in general, and to us, specifically. That is our task. In the course of this task, different ideas will be presented. Some interpretations will be rejected; some will be accepted. Some will be rejected, only to be later accepted, even after the space of years (see below for an example).

If we do it well, we will reach a point in the discussion where we are entertaining several different interpretive possibilities at once. This is the moment during which we become experts in cognitive dissonance, as we relish interpretive uncertainty, examining each shiny new idea and interpretation with the delight of a child holding up gorgeously colored beads to the light. We may put a bead down, but only to take up another, different one–and we may well pick up the discarded bead again to play with it some more.

The thing that makes the study of literature so important in this process is that it isn’t really all that important in the grand scheme of things. To my knowledge, no one has ever been shot for their interpretation of Hamlet; the preservation of life and limb does not hang on a precise explanation of Paradise Lost. If we use the study of literature as a classroom designed to increase our capacity for cognitive dissonance, in other words, we can dissipate the highly charged atmosphere that makes changing our minds so difficult. And once we get used to the process, once we know what it’s like to experience cognitive dissonance, it will be easier for us to tolerate it in other parts of our lives, even in the sphere of public policy and politics.

If I seem to be writing with conviction (no cognitive dissonance here!), it’s because I have often experienced this negative capability in real time. I will give just two examples. The first occurred during a class on mystery fiction, when a discussion of the role of gossip in detective novels turned into a debate about the ethics of gossip itself. The class disagreed violently about whether gossip could be good or neutral, or whether it was always bad. A loud (and I mean loud!) discussion ensued, with such force that a janitor felt compelled to pop his head into the classroom–something I have never seen happen before or since–to ask if everything was ok. While other teachers might have felt that they had lost control of the classroom, I, perversely, believe that this might have been my most successful teaching moment ever. That so many students felt safe enough to weigh in, to argue and debate passionately about something of so little real consequence, suggested to me that we were exercising and developing new critical aptitudes. Some of us, I believe, changed our minds as a result of that discussion. At the very least, many of us came away seeing the topic differently than we had at the start. This, of course, is the result of experiencing cognitive dissonance.

My second example is similar. At the end of one very successful course on Ernest Hemingway, my class and I adjourned for the semester to a local bar, where we continued our discussion of The Sun Also Rises. My student Cari and I got into a very heated discussion about whether the novel could be seen as a pilgrimage story. Cari said it could; I vehemently disagreed. The argument was fierce and invigorating–so invigorating, as a matter of fact, that at one point a server came over to ask whether something was wrong, and a neighboring table began to take sides in the debate. (For the record, I live in Hemingway country, and everyone here has an opinion about him and his works.) Cari and I left the bar firmly ensconced in our own points of view, but a couple of years ago–some three years after the original argument–I came to see it from Cari’s perspective, and I now agree with her that The Sun Also Rises can be seen as a sort of pilgrimage tale. It took a while, but I was able to change my mind.

It is this capacity to change one’s mind, I would argue, that is important–indeed indispensable–if the democratic process is to thrive.

In the end, it may well be that the chief contribution good teachers of literature make to culture is this: we provide a safe and accessible place for people to learn what cognitive dissonance feels like, and in doing so, we help them acquire a tolerance for it. That tolerance, in turn, leads to a greater ability to participate in civil discourse, which is itself the bedrock of democratic thought and process. In other words, you can invest in STEAM classes all you want, but if you really want to make people good citizens, do not forget about literature courses.

In view of this discovery of mine, I feel it’s my duty to host a noncredit literature class of sorts this fall: a discussion-driven newsletter covering the great works of English literature–whatever that means–from Beowulf to the early Romantic period. If you’re interested or have suggestions, please let me know by commenting or messaging me, and I’ll do my best to keep you in the loop.

And in the meantime, keep your minds open! Cognitive dissonance, uncomfortable as it is, may just be what will keep democracy alive in the critical days to come.