Our Pushcart Nominations for 2015

The Editorial Staff


Each year we have the opportunity and the challenge of nominating six of what we consider our best poems and stories for the Pushcart Prize. The decision is a difficult one. Every poem, story, and essay published in Lost Coast Review is exceptional, chosen from among hundreds of other submissions. Our poetry acceptance rate has dropped below 15% and our short story acceptance rate is below 10%, and these numbers shrink with each passing year as we receive more and more submissions for a limited amount of space in our journal.

Despite the difficulty of choosing among many outstanding publications, we had to make a choice. This year we first narrowed the field to 15 poems and 5 stories; from that list, 4 poems and 2 stories were chosen. We list our 6 nominations first, followed by the 14 honorable mentions, each of which gave our nominations a run for its money.

Pushcart nominees (in order of publication):

Poems:


“Singular” by Len Krisak

“It was Late and We Stopped Talking but We Didn’t Hang Up” by Mark Jackley

“When You Were Gone” by Mark Burke

“Your Keys” by Heather Browne


Short Stories:

Family Honor by Frank Pray

Stephen Crane & the Mentor by Tom Tolnay


The other poems and stories, nominated by staff members but not receiving enough of our combined votes, were:



“Cleansing the Haunted House” by Anca Vlasopolos

“Fresh Air” by Michael Mark

“Beautiful are the Turtles and Snails” by Alex Hughes

“Mailbox” by Mark Jackley

“Levon Helm Reborn as a Snapping Turtle” by Mark Jackley

“Scenes Out of Childhood” by Barbara Lightner

“Sunday Drive” by Georgia Tiffany

“Autumn Loss” by Joseph Lisowski

“Mid-November” by Joseph Lisowski

“A Peppermint” by Hadley Hury

“Untitled (Still Life)” by Gayane Hovsepyan


Short Stories:

Jackpot by Bethany Snyder

Need by Robin Wyatt Dunn

You Can’t Lose Them All by Penn Javdan


It goes without saying that all these decisions are subjective and that another group of editors would have made different ones. Congratulations not only to our Pushcart nominees and runners-up, but to all who achieved publication in Lost Coast Review in 2015.


Sexism in Literary Editing and Publishing

Lost Coast Review Editorial Staff



Each year since 2010, VIDA: Women in Literary Arts has conducted the “VIDA Count,” which assesses “thirty-nine literary journals and well-respected periodicals, counting genre, book reviewers, books reviewed, and journalistic bylines to offer an accurate assessment of the publishing world.” In 2014, as in previous years, the count revealed a field dominated by men. This finding unleashed a storm of controversy in social media, with commenters either affirming the count as “meaningful” or castigating it as trivial and “meaningless.” As observers of this controversy, we at Lost Coast Review asked ourselves about our own gender biases, a question that raised a more basic one: how would we assess whether such a bias existed? We asked each of our editors to suggest how to determine whether sexism exists in a literary magazine’s or publisher’s actions. Here are four of our staff’s answers:



Casey Dorman—Editor in Chief: Male

There is gender bias in literary editing and publication if the percentage of one gender in either field is disproportionate to its percentage in the population of those eligible to enter that field. In the U.S., those possessing entry qualifications, at least in terms of education, are in fact disproportionately women, both in college degrees generally and in degrees in English or journalism specifically. Thus any lack of female representation in literary magazine editing or in publication rates represents a bias operating within the profession itself, and the 2014 VIDA results substantiate that such a lack exists. Some will argue that looking only at employment or publication numbers is not a fair assessment of gender bias, since it is a surface indicator and does not necessarily reflect such things as attitudes, but it certainly reflects job opportunities and earning power, so I believe it is a fair metric, though not the only one. Still relying upon numbers, a brief look at the most prized literary awards, such as the National Book Award, reveals a similar disproportion. Since its inception in 1950, the National Book Award for fiction has been awarded to only 16 women, 9 of them in the last 20 years. The award for poetry has fared no better: only 14 female winners since 1950, half of them in the last twenty years. Things have improved over the last two decades, but the disproportion remains. I will leave it to others to suggest an explanation for these facts.


Jasmine Romero—Intern: Female

Sexism in the publishing industry is apparent in the way editors judge content and genre based on an author’s gender. Female authors still feel the need to abbreviate their names to appear more gender-neutral, so that more people will pick their books off a shelf without making assumptions about the content. There is also the related issue of a woman’s science fiction or fantasy novel being categorized as a romance, rather than the futuristic adventure that it is, simply because it contains a love story within the plot. However, the real issue is that because a woman’s story contains romance or sex, regardless of the rest of the plot, it is often regarded as trash that is ruining the genre for the “good” male writers. Yet there are science fiction and fantasy stories written by men that include sex and rape and treat women as objects, and these aren’t being discussed in a negative light—at least not enough to provoke large-scale change. The publishing industry is still a boys’ club. Rather than focusing on content and the diversity of the characters (the important pieces of a good story), editors make an author’s gender a determining factor in whether a story is worthy of being printed. This might be one of the reasons women are currently top-ranking in the self-publishing industry. No one is telling them they aren’t good enough, and people are buying their books for the content. It’s not about who they know, but what they can write—which is what publishing should be about in the first place.


Diane Rogers—Short Story Editor: Female

When there’s an elephant in the room, asking “how big” is a rather pointless exercise. Emphasizing the size of the problem shifts the significance of inquiry away from the more important discussion, namely the impact and risks imposed by the elephant’s presence. 

As literary editors, our job is to steer clear of reductionistic debate involving the quantification of bias. We serve readers best by engaging in an honest exploration of the effect of gender bias on society and our role in it. Such a conversation begins by shamelessly opening our editorial kimonos to expose the soft underbelly of our individual literary preferences. Revealing the grounds on which we make our literary selection means getting up close and personal with the narratives that shape our thinking. 

Bias is defined as the systematic preference of one group to the exclusion of another. In literature, this translates to the prejudicial selection of a singular style or voice. In her groundbreaking TED Talk, novelist Chimamanda Adichie warns that limiting perspectives in literature leads to “critical misunderstanding” of gender and culture.

Although some studies on gender bias suggest that male characters outnumber female characters more than two to one, it is the impact of gender portrayal that has social scientists most concerned. Peterson and Loch (1990) suggest that social attitudes are shaped by literature. Beginning in childhood, reading has been shown to influence self-concept as well as perceptions of gender roles. As attitudes are guided by what people read, editors have a duty to expand the chorus of voices to include a broader range of perspectives and gender depictions in literature. 

The editorial journey away from gender bias involves three steps. It begins by evaluating our selection criteria. Do the stories and poetry we choose reflect a variety of perspectives? Are gender portrayals stereotypical? Have we dared to venture outside the dominant narrative? The second editorial responsibility is to ensure that we explore a chorus of social and cultural themes and narrative voices. Offering a variety of material fulfills a third editorial function, education. The world is diverse and literature has a role in educating the public through contrasting viewpoints.

The elephant never goes away on its own. Neither will gender bias. Removing bias (and elephants) is best handled by exposing its presence, exploring its impact and educating ourselves on how to coax it out from the public domain.


Hadley Hury—Film Review Editor: Male

The more I’ve thought about this question, the more challenging it becomes to make useful comment. I agree with an observation by novelist Cheryl Strayed in “Bookends” in The New York Times Book Review of May 17: “I don’t think there’s a secret commission of readers and editors dedicated to the mission of keeping women writers down. I think we live in a patriarchy, which means that everything we observe, desire, and consume is in some essential way informed by gender assumptions that privilege men.” So it does not surprise me that some research now suggests that the majority of literary review editors and contributors are male. If this majority is reliably verified by more than one or two studies with valid research methods—assessing all U.S. literary reviews and journals longitudinally over an appropriate time—then the question has an answer. The number is irrefutably what the number is and, in the most quantifiable and probably most unarguable sense, one need look no further: it’s a physical tally of male or female bodies holding “x” number of slots as editors or contributors. That said, the question seems loaded with other interesting and more-difficult-to-assess questions. For instance, since much progressive thinking agrees that feminine/masculine intellectual, psychological, and emotional traits, interests, talents, and propensities do not exist exclusively within physical gender but along a highly fluid continuum, can “sexist bias” be proved conclusively by a body count? What do I make of the fact that, in surveying my reading (in all genres, with the possible exception of drama) over the past few years—and, I’m fairly certain, throughout six decades of a reading life—there is an observable bias toward female writers that I might estimate at as much as two-thirds of the total?
Or that many male readers, on surveying the most memorable and influential fictional characters in their lives, may find more women than men? Or that in my own professional experience I have more often worked with, and been influenced by, female leaders than male? Having considered myself a stalwart male feminist, or “liberated male,” for many years, I am not trying to dodge the fundamental issue of numbers any more than I would the incontrovertible fact that American women continue to earn, on average, only 78% of what their male counterparts earn. What I am is hopeful that, in assessing possible male bias in the world of literary magazines, any conversation can be rich enough to include aspects of the question that fit less easily into an empirical matrix.






Religious Freedom or Religious Folly?



The Editor-in-Chief

The Religious Freedom Restoration Act (RFRA), passed by Congress in the 1990s, places the burden on the government to prove that restricting someone’s freedom to practice his or her religion is necessary for the public’s protection. Under the act, a person can fight the government in court if he or she believes the government has violated his or her religious rights. When the law was passed in 1993, it was meant to preserve the rights of Native Americans to continue using land they deemed sacred for religious ceremonies, such as burials, despite government efforts to use such lands for public projects. It also protected the rights of Native Americans to use peyote in some of their rituals, despite peyote being a banned substance. The act’s most relevant provision asserts that “government shall not substantially burden a person’s exercise of religion even if the burden results from a rule of general applicability,” meaning that a federal law need not be aimed at a particular group and can apply to the general population, but if it “substantially burdened a person’s exercise of religion” it can be set aside for that person or group. Only a “compelling government interest” can overrule the RFRA. However, it applies only to federal laws, which is why twenty states have their own versions of the law.

Now both Indiana and Arkansas have enacted their versions of the RFRA, causing an unprecedented furor among both LGBT anti-discrimination advocates and many private businesses. Initially, defenders of the new state laws claimed that there was no difference between their states’ laws and the federal law, which had been sponsored by Democrat Chuck Schumer and signed by President Bill Clinton after passing with strong bipartisan support (then-state senator Barack Obama later supported Illinois’s version). Opponents within the two states cited the motivation behind the new laws, which they claimed was the desire of conservative religious groups to find a legal reason to discriminate against gays and lesbians. They also pointed out that the laws concerned the religious rights of businesses, rather than individuals as in the federal law, and that they allowed the law to be used in defense against lawsuits by private individuals, not just the government, as the federal law stipulates.

Both sides in the debate were being disingenuous. While it is true that conservative religious groups were the main supporters of these new RFRA laws, it is not true that the federal law applies only to the religious freedoms of individuals. The Supreme Court’s Hobby Lobby decision, which concerned the federal RFRA, made it clear that closely held private businesses have the same religious rights as individuals under the federal law. The Indiana and Arkansas laws simply reflected that decision. But that did not make them carbon copies of the federal law. The federal RFRA still cannot be invoked except against a government regulation restricting the religious rights of a person or a private corporation, while the state laws allowed the statute to be used against a lawsuit brought by an individual or private group. This remained a difference between the federal statute and these two states’ laws, and it is an important one, since it meant that businesses asserting a religious right to refuse service to someone need not fear that that person could bring a civil lawsuit against them.

The United States prides itself on being a pluralistic society with regard to religion. Our founding fathers were particularly sensitive to government interference in religious matters. We have no state-sponsored religion, and all religions are equally protected under our laws and our constitution. The RFRA is part of the fabric of laws that protect our religious freedom. In France, it is illegal for schoolgirls to wear headscarves to school, because the headscarf is a sign of affirmation of the Muslim faith, which many French citizens feel creates an unhealthy division within their society. No such law could be passed in the United States. Even if it were argued that distinctive clothing styles among different religious groups undermine national solidarity and erode social capital, the RFRA would not allow laws such as France’s to be put in place here. Neither can Jews be forced to remove their kippahs, nor the Amish to shave their beards. So before urging the government to throw out the RFRA, we need to think about the reasons it’s there.

On the other hand, a new wave of “religious freedom” legislation seems to be sweeping across parts of the United States. Indiana and Arkansas passed their own versions of the RFRA, extending the law in two states whose anti-discrimination laws do not cover gay, lesbian, bisexual, and transgender individuals. Fourteen other states have had similar legislation introduced for a vote during the next year. By itself, this is not new and can even be applauded: the Supreme Court ruled that the federal law does not apply to states, so each state must pass its own law to give religious freedom the same protection ensconced in the federal statute, and twenty states have already done so. But taking a cue from the Supreme Court’s Hobby Lobby decision, Indiana and Arkansas now include private businesses as well as individuals in their definitions of “persons” practicing a religion. In addition, the new laws in these two states allow the state’s RFRA to be used as a defense in a legal suit brought by an individual, rather than only against a government entity that has violated a person’s religious freedom. This last difference, found elsewhere only in the Texas RFRA, further raised people’s fears that the law could be used to defend discriminatory practices in business (note that at the time of this writing, the governors of both states were seeking to change the laws to make them less usable for discrimination against anyone).

The governors of both Indiana and Arkansas, aware of the backlash their laws created across the nation and within their own states’ business communities, demanded that anti-discrimination guarantees be added to their RFRA statutes. While this is not “too little, too late,” as some have claimed, the fact that the governors had to defy their own state legislatures to force such alterations is an indictment of the mindset of conservative religious groups.

Discrimination against a person on the basis of his or her religion was one of the main reasons anti-discrimination laws were developed in the United States. Yet with the gradual liberalization of attitudes toward race and sexuality, some religiously minded people have come to feel that having to accept the legitimization of practices their religious beliefs oppose is itself a violation of their religious freedom. At first this included such things as integration of the races, then interracial marriage, and now contraception, abortion, and same-sex marriage.

Given that some people disapprove of same-sex marriage, or of the use of contraception or of abortion, on the basis of their religious beliefs, it certainly seems valid that no one, neither the government nor their fellow citizens, has the right to force them to accept such practices for themselves (it is not clear what this could even mean with regard to same-sex marriage). But what about the public sphere? Does their religious freedom extend to how they treat people in their business dealings? According to the Supreme Court, it does.

But isn’t observing one’s religion a personal behavior? Even if it involves observing holidays not observed by the majority of citizens, restricting one’s eating habits, or dressing or speaking in a certain way (some Quakers still say “thee” and “thy”), these behaviors seem more personal than refusing to serve someone else, or to provide her some types of healthcare. When does observing one’s religion slide over into imposing that religion’s values upon someone else? Sure, there are always other places to work, or to buy wedding cakes or flowers, but that argument begs the question. And the question is: when does observing one’s religious beliefs in one’s business practices become discrimination?

The Hobby Lobby decision is clearly an instance in which personal religious beliefs are allowed to determine how a business owner treats his or her employees, yet there appears to be no case for claiming discrimination. Not providing contraception in a health care plan applies equally to everyone and is not a decision made on the basis of characteristics of the employee (except perhaps that it applies to women and not to men, but that is true of many medical services: Medicare does not pay for medications to treat erectile dysfunction, and that is not discriminatory against men even though the rule applies only to men). But refusing services to someone whose personal behaviors violate one’s religious beliefs, e.g., gays or lesbians, certainly appears discriminatory, because the same services are being provided to others.

So something is wrong. In what sense can providing a service to someone else—a service that one feels perfectly fine providing to others—compromise one’s religious rights? In what sense can refusing to provide that service to someone on the basis of his or her sexual orientation not be discrimination, pure and simple? In fact it is discrimination, and no one’s religious rights are compromised by providing such services. The real reason for enacting such a law as Indiana first enacted and Arkansas almost enacted was to send a message about conservative religious groups’ disapproval of gay, lesbian, and bisexual behavior, including marriage. That message was already quite apparent. What these groups wanted to add was punishment for those in the LGBT community who want to be open about their sexual preferences and practices. The message now was: “If you’re LGBT and you want to live in our community, you are going to live uncomfortably.” In other words, you’re not welcome here.

Is this the kind of message that religion teaches us to send to our fellow Americans?


Trolleys, Terrorists, Torture and Truth

Senate Select Committee on Intelligence: Committee Study of the Central Intelligence Agency's Detention and Interrogation Program

The Editor-in-Chief


Most of us are aware of the philosophical dilemma posed by what is known as the “trolley problem.”  A runaway trolley is moving along its track and there are five people tied to a stretch of track ahead. You are able to pull a switch that will divert the trolley to an alternative track, on which one person is tied. Should you pull the switch and deliberately kill the one person to save the five? Nearly seventy percent of professional philosophers say you should. Saving the lives of many is worth sacrificing the life of one.

The Department of Justice’s Office of Legal Counsel agreed, offering a similar argument to the White House in defense of the CIA’s Enhanced Interrogation Techniques (EITs):


"It appears to us that under the current circumstances the necessity defense could be successfully maintained in response to an allegation of a Section 2340A violation... . Under these circumstances, a detainee may possess information that could enable the United States to prevent attacks that potentially could equal or surpass the September 11 attacks in their magnitude. Clearly, any harm that might occur during an interrogation would pale to insignificance compared to the harm avoided by preventing such an attack, which could take hundreds or thousands of lives."


So the argument in favor of using torture (EITs) is that it is necessary in order to save countless lives. Much of the debate over the Senate report on CIA interrogation methods has focused on whether such EITs did, in fact, save countless lives. The evidence cited in the report strongly suggests that these techniques were not instrumental in thwarting any real terrorist plots or capturing any dangerous terrorists. President Bush, Vice President Cheney, and six former CIA directors or deputy directors, writing in the Wall Street Journal, challenged the Senate report, citing CIA and other law enforcement agency documents to buttress their claim that the report is inaccurate and that the EIT program was effective. However, they provided no new evidence on the issue.

It’s interesting that in the trolley problem, while many people agree that diverting the trolley to kill one instead of five by pulling a switch is moral, far fewer agree that accomplishing the same outcome by pushing a fat man off a bridge over the tracks, so that he stops the trolley but dies in the process, is moral. Apparently, deliberately pushing a man to his death seems less moral than pulling a switch. Similarly, it could well be that the type of torture involved alters the morality of using torture to obtain life-saving information (would it be moral to bring someone’s children before him and kill them one by one until he gave up his information? How many people would need to be saved to make this a moral act?). Probably most people would agree that using the least amount of torture necessary to obtain the information is the most moral course.

So what did the CIA do? The Senate report, based mostly on the CIA’s own internal documents, makes it quite clear that CIA interrogators, officers, and administrators did not use, nor even recommend using, the least egregious torture methods necessary to obtain information. Instead, many of the out-of-country detention facilities routinely imposed sensory deprivation, altered diets, 24-hour loud noise, limited clothing (if any), cold temperatures, and a lack of toilet facilities on all detainees as soon as they were incarcerated. Some of the detention sites had neither heating nor plumbing; one detainee died of hypothermia while chained to a wall overnight without clothes. High-value detainees (HVDs) such as Khalid Sheikh Mohammed were, contrary to CIA claims, not questioned at all before torture was instituted. In the case of HVD Abu Zubaydah, even as FBI interrogators using rapport-building methods were extracting valuable information, they were brushed aside by the CIA, which immediately resorted to coercive EITs to try to force more information from Abu Zubaydah (apparently unsuccessfully, by the CIA’s own internal reports).

And what were the EITs used by the CIA on Abu Zubaydah? They included “walling, attention grasps, slapping, facial hold, stress positions, cramped confinement, white noise and sleep deprivation,” continued in “varying combinations, 24 hours a day” for 17 straight days. He was “subjected to the waterboard 2-4 times per day.” Abu Zubaydah “spent a total of 266 hours (11 days, 2 hours) in the large (coffin size) confinement box and 29 hours in a small confinement box, which had a width of 21 inches, a depth of 2.5 feet, and a height of 2.5 feet.” During some of this time, insects (of which he was deathly afraid) were placed in the box with him. Abu Zubaydah was described as “hysterical” and “distressed to the level that he was unable to effectively communicate.” Waterboarding sessions “resulted in immediate fluid intake and involuntary leg, chest and arm spasms” and “hysterical pleas.” In at least one waterboarding session, Abu Zubaydah “became completely unresponsive, with bubbles rising through his open, full mouth.” He remained unresponsive until medical intervention, when he regained consciousness and expelled “copious amounts of liquid.” Other detainees, including Khalid Sheikh Mohammed, were subjected to “rectal feeding” as punishment for not giving information.

Even for those who defend torture because it saves lives, if a moral defense entails using the least amount of torture necessary to obtain crucial information, the CIA flunked the test. And it flunked further ethical tests. Its own documents indicate that the CIA lied to the president, to congressional committees, and to administration officials about the extent and routinization of its use of EITs. It lied about the intelligence the techniques were producing. It lied to Congress and to the American public about how many detainees were held in overseas detention facilities. It claimed that some of its detainees posed an imminent threat, or had knowledge crucial to foiling such a threat (criteria that had to be met to satisfy the memorandum of understanding allowing the rendition program, which set up the foreign prisons), when it had no evidence, or even a suggestion, that this was the case. On more than one occasion, detention sites were established in a country without informing the U.S. ambassador to that country, or the ambassador was deliberately lied to about the site’s presence. And of course the CIA destroyed over a hundred videotapes of interrogation sessions, which might have proved, one way or another, how harmful its interrogation techniques were to its prisoners.

So the CIA’s Detention and Interrogation Program flunks the ethics test and probably the effectiveness test. What about the administrative test of being run in an orderly fashion? The CIA we see depicted in the movies may be unethical and cruel, even conspiratorial, but it always runs like clockwork, with a relentless precision that cannot be denied by terrorists, by lone-wolf zealots, or even by Jason Bourne. The CIA depicted in the Senate report operated more like a malevolent branch of the Keystone Kops. No one in the organization knew exactly how many detainees were being held in the overseas program, or who they were. According to the Senate report, “The CIA maintained such poor records of its detainees in Country (X) during this period that the CIA remains unable to determine the number and identity of the individuals it detained.”

Many of those involved in the program were inexperienced, had questionable service records, or both. When the CIA instituted a training program for interrogators, it refused an offer by its own counsel to oversee the selection of trainees, an offer made on the basis of the counsel’s qualms about those being selected. When the Senate Committee reviewed CIA records on the officers and contractors involved in the Detention and Interrogation Program, most of whom conducted interrogations, it identified “a number of personnel whose backgrounds include notable derogatory information calling into question their eligibility for employment, their access to classified information, and their participation in CIA interrogation activities. In nearly all cases, the derogatory information was known to the CIA prior to the assignment of the CIA officers to the Detention and Interrogation Program. This group of officers included individuals who, among other issues, had engaged in inappropriate detainee interrogations, had workplace anger management issues, and had reportedly admitted to sexual assault.”

And then there were the two psychologists, whose “learned helplessness” research with animals led to their theories of how to create helpless dependency on interrogators by breaking down detainees’ ability to control the nearly constant punishments issuing from their prison environment. Neither of the two psychologists who were the architects of much of the torture program had experience as an interrogator or in real-life detainee situations. Yet they were given the task not only of evaluating the detainees’ mental status but of prescribing and, often in the case of waterboarding, carrying out the torture (for which they received $1,800 in extra pay per session). The two, who were consultants rather than CIA employees, became so central to the EIT program that they formed a company to supply guards, interrogators, and expertise to the CIA’s overseas prisons and obtained a $180 million contract with the CIA, about half of which they collected before the contract was suspended by President Obama’s termination of the program. They are nonetheless guaranteed legal assistance from the CIA until 2021 in the case of any future lawsuits or criminal prosecutions against them.

Immoral, ineffective, and inept: that seems the best way to characterize the CIA’s overseas Detention and Interrogation Program. The argument is often made that torture shouldn’t be used because it doesn’t work. While that has been the conclusion of most experts, including the CIA itself in the past, the evidence is inconclusive because of the understandable dearth of well-controlled studies. In this case, the EIT program was ineffective because the same information was available from other sources, or from the same sources when questioned without torture (usually by the FBI or while in the custody of another country), or because the information produced under torture was fabricated. In many cases, the detainee simply had no information to provide. And if a prisoner is tortured before being questioned, how is one ever to know what he would have said had he not been tortured? According to the FBI interrogators of Abu Zubaydah, rapport building, which included even holding the detainee’s hand when he was in pain while recovering from the wounds sustained during his capture, produced far more useful information than torture did. We will probably never know whether torture is effective apart from the confounding conditions under which it has been applied, as in these CIA operations. What we do know is that applying severe torture without first attempting less severe methods, or no torture at all, is immoral. Lying to the government and to the people who support your organization is immoral. Running a program that deals with people’s lives—both detainees and the innocent civilians they seek to harm—with extreme sloppiness and inefficiency is immoral.

The CIA is guilty on all counts.



The Pushcart Prize


The Editors


The Pushcart Prize, which claims to be “the most honored literary project in America,” each year publishes its choices of the best poetry, fiction, essays, memoirs, and novel excerpts published by the world’s small presses. The founding editors, who included Joyce Carol Oates, Paul Bowles, and Ralph Ellison, have been followed by a virtual who’s who of letters on the Pushcart Prize Fellowship’s advisory board. The collection of approximately sixty selections, dubbed the “best of the small presses,” is routinely received with worldwide praise; the New York Times has called its publication “a distinguished annual literary event.”

For the first time in our five years of publication, Lost Coast Review is submitting nominations for the annual Pushcart Prize. As a small literary magazine we are allowed up to six nominees, and we have selected two poems and four short stories from our 2014 issues, including the current one. It was difficult to choose among the many quality stories and poems published this year, and our selections were not always unanimous among the editors. By selecting only two poems, we surely neglected several others that deserved nomination. Be that as it may, below are our nominees:




Poems:


Autumn Song by Anne Britting Oleson. Vol. 5, No. 2, Winter 2014. This remarkable short poem is filled with beautiful images of the changing of seasons and the sadness of approaching winter as it descends upon a mountain farm and forest.


Dangers of Suburbia by Michael Mark. Vol. 5, No. 4, Summer 2014. In a poem filled with irony, we watch a family search for their lost dog, putting up posters, praying for his return, as we observe that the flight from the bustle of the city to the quiet of the suburbs, where coyotes still roam at night preying on family pets, has its own dangers.


Short Stories:


Billy Penn’s Hat by Brian Patrick Heston. Vol. 5, No. 2, Winter 2014. Billy Penn’s Hat captures the daily desperation of Sam Thompson, a man down on his luck and enslaved to drink. Sam earns a living in costume, impersonating William Penn in the Philadelphia park dedicated to the Quaker founder of the colony of Pennsylvania; his job is to pass out flyers for the Old City Tavern across the street from the park. Despite his nearly constant inebriation, Sam is good at his job. He knows Penn’s life story, is able to embellish it to attract a crowd, and enjoys reflecting on the historic personage’s life. But his head is also filled with visions of his lost love, Liz, on whom he walked out many years before, leaving her alone with their child. Sam spies a young woman who reminds him of Liz when she was young, provoking nostalgic reveries and leading him to drink even more, which gets him fired. Broke, drunk, and lying on the street, he wakes to find a crowd of college students putting money in his William Penn hat and urging him to give his impersonation. At first derisive, they succumb to the fascination engendered by his eloquence on his favorite topic and put more money in his hat.

This is a poignant tale, told with humor, insight, and tenderness. The author, Brian Patrick Heston, brings the middle-aged, dissolute Sam Thompson to life as a soulful, intelligent, caring human, hopelessly ensnared by his addiction to alcohol. While we watch Sam drink away what appears to be his last chance at any kind of employment, we fear for his future. But fate and Sam’s artistic nature intervene to save him, at least temporarily—fate and Billy Penn’s hat.


Soft Ice Cream by Bruce Colbert. Vol. 5, No. 3, Spring 2014. Bruce Colbert is the author of the recently published short story collection A Tree on the Rift (Lummox Press, 2014), and Lost Coast Review was lucky enough to publish his story Soft Ice Cream in our Spring 2014 issue. Told from the point of view of Scotty’s friend and business partner, a man recently separated and still feeling the pangs of loss from the demise of his marriage, it is the tale of Scotty, who handles his own twice-divorced status with the panache of a man successfully outrunning the law. Scotty tells his partner how, while enjoying a soft ice cream cone one day, he picked up a striking young woman and successfully bedded her that afternoon at his beach home, only to find that she then demanded $500 or else she would accuse him of rape. Using his combat-tested (in Vietnam) wits, Scotty successfully threatens her with arrest by his building’s security guards on charges of extortion. It is a story told with rollicking good humor, delightful irony, and enough suspense to entice the reader right to the very end. All of these features made Soft Ice Cream a natural choice as one of our Pushcart nominees.


The Prettiest Girls in Roseburg by Bruce Douglas Reeves. Vol. 5, No. 4, Summer 2014. Bruce Douglas Reeves introduces Harry White, “the kid everybody forgot,” who is fixated on the twin sisters Phyllis and Charlotte Gerber, “The Prettiest Girls in Roseburg.” Harry is a dramatic portrayal of deepening obsession, compulsion, and delusion. Invisible to the “perfect” twins whom he fervently follows, he sees them, with impenetrable certainty, in all their beauty as no one else can; yet he also imagines that when they observe him, they see others whom they would like more. From within the idealization of his “love” for them, Harry at first watches both girls from afar and believes that “in his own way he owned them.” Cut off from their realness, and in contrast to their popularity, beauty, and visibility, he inwardly “kidnapped some of their realness for himself… even though he knew he’d never be part of those lives.”

In a town where “Only beautiful people, talented people, rich people existed,” and where “Harry was none of those…,” he stalks and seeks to possess Phyllis, the twin he loves more than her sister, Charlotte. He exists within the pathetic, unresolvable conflict of his own isolated, polarized doublings of image and action, from which he strategizes the fulfillment of his wish to possess and merge with Phyllis, the idealized object of his love. After his mother dies of cancer, an absence that “didn’t make any difference in his life,” and his father kicks him out, Harry spends the passing years of his silent, isolated pursuit of Phyllis working menial back-room jobs, enduring rejecting bosses, barely able to support himself in his clandestine quest.

Harry wanders through the cycles of Phyllis’ high school and college years, her failure in the competition for Miss California, her brief sorority life, her dating and sexual experiences, her dropping out of college, and her disappearance from Roseburg, “living with some guy in San Francisco.” Then, years later, in a darkened theatre, he overhears Charlotte whisper to a friend that Phyllis is a stewardess flying round trips to and from Vietnam. Harry has become a mere voyeur, spying on Phyllis from hiding places in the bushes near her doorways, remembering a young man who years before had embraced Phyllis and, in his apparent passion, had ejaculated on her dress. It is an act Harry later rapaciously imitates on a San Francisco night when, after stalking her, he forces Phyllis into her apartment, bolts the door behind her, and strips her, all the while, in his delusion that she loves and desires him as he has for years loved her, reassuring her that he will not hurt her. Harry commands her not to scream, “ ’cause nobody’ll come,” because “People never come.” Then he “hurtled downstairs into the fog.” Two weeks later, after he finds Phyllis’ apartment empty, the building manager tells him to “get lost.” Harry ends up prowling the high school and all the places where the twins and “the Homecoming Parade had breezed its way into his memory.” He is last viewed “waiting, hoping,” believing that “They would come back, and when they did, he’d be there.” Thus the child of obsession, compulsion, and delusion comes full circle, left alone in despair. A beautiful, complex story, worthy of our nomination.


Petite Suite Cybernetique by Robert Wexelblatt. Vol. 6, No. 1, Fall 2014 (this issue). The first of the story’s three scenarios about the internet contains a running series of posthumous blog entries and accompanying reflections, beginning with commentaries by a “self-sufficient spinster” grieving over Charles, who had wanted to marry her, an invitation she rejected before he died of an infarction—a fatal heart attack that “ripped a hole in what I’d thought of as the unbreakable fabric of my life.” She is surprised by the contrast between the solid, even stolid, down-to-earth, uninspired Charles she knew and the Charles revealed in these newly uncovered blogs, in which he has supposedly written to her, exploring emotion, relationship, and the meaning of living and dying. His blogs, and the blogs of others that accompany her reactions, become a kind of chat room, a cybernetic litany exposing the contrasts between virtual reality and the existential world of authentic, intimate human experience. The ensuing blogs offer a seemingly safe internet haven for someone like her who, wishing to avoid the personal demands of living life fully, is afraid to surrender herself to someone else. She soon discovers that the entries she initially thought were written to her by Charles are in fact both to and from others. They represent the opposite of the Charles she thought she knew. This newly blogged “Charles” renews his invitation to leave her net of safety and explore an unknown life of unconstrained emotions. As before Charles’ death, she retreats from the invitation.

There are three sections in these “chats.” The conspiracy theory of a “Rudolph” and his internet circle in piece #2 is a self-justifying haven away from lived reality that holds an irresistible attraction for Rudolph, raising a question not raised in section #1: what defines sanity in a world of the esoteric and indefinite, where “conspiracies” may be quite real in life as lived? In piece #3 the internet interlocutors explore the advantages of restricting interaction to an idealized, controlled mode of sharing, of being defined by only that which one wants to share—a control generated and sustained by limiting communication to internet posts. In all three pieces the characters opt to withdraw from real person-to-person interactions, each for different reasons, all demonstrating the attraction of the internet as an alternative mode of interface. In his submission, Wexelblatt noted that his story is a “hybrid literary/musical form…a common theme in different movements, like the suites of various French composers, pre-eminently Debussy.” As with the work of Debussy, the story exploits dissonance—irregular and fragmented “floating chords” that have no resolution, a shading of innovative harmonies typical of free verse or jazz, or the indefinite, esoteric, even mysterious impressions made on the mind by combinations of a single color, as in the works of the French Symbolists. Thus a loveless woman, despite overtures that might contain the potential of a realized élan vital, continues to take up residence in an avoidant cyber distance, a condition corresponding to her drift from grief over the loss of Charles and the loss of self contained in her own unrequited love.
Despite the invisible guests who chat voicelessly from the blogosphere, all are finally unconsummated: mere virtual bodies, barely wishes, posthuman cybernetica, shades of sadness, silent voids cut off from love—wandering, lost to the meaning of time, memory, and the real, mere floating shades.


We are excited and honored to have been able to submit these six beautiful examples of writing as our 2014 Pushcart nominees.



E-Books, Print, and Prices




Readers, writers, even publishers, especially Indie publishers (in today’s world often a cover term for self-publishers who have formed their own publishing imprints), are mystified by the ongoing battle between Amazon and Hachette—either mystified or consumed by strong opinion. There seem to be two real issues: first, who can determine the price of e-books, the publisher or the retailer; and second, what tactics a retailer may permissibly use to bring a publisher into line.

Hachette wants to charge more for e-books from its most prominent authors; Amazon wants to limit e-book prices, keeping the range smaller and lower. The two companies must negotiate an agreement, and when Hachette refused to give in to Amazon’s demands, Amazon made Hachette’s titles harder to order on its website, delaying shipments, prohibiting pre-publication orders, or occasionally removing “buy” buttons from titles.

Amazon is able to sell e-books at a loss in order to build up its customer base and compete with other sellers. Doing so cuts into profits for the publishers. Amazon’s argument is that by lowering prices it will sell more books and publishers will still make plenty of money.

Most Indie publishers and self-published writers are on Amazon’s side. For most self-published authors, the goal is to get their books in the hands of as many people as possible. Low prices, even free giveaways, are prime strategies for doing so. Large publishing companies rely on established names, whose books will sell because they have a waiting fan base, which is willing to pay higher prices for its favorite authors.

Without getting into other issues, such as the discounts publishers pay Amazon for advertising their books or the long-term implications of one retailer holding a monopoly within the book industry, why is this controversy important to anyone other than the big publishers and their best-selling authors? They will make less money, but the bulk of writers and Indie publishers will be unaffected, won’t they?

Amazon owns the lion’s share of the book-selling market, both for print and for e-books, somewhere around 65% in each case. Print books still outsell e-books, but the numbers are deceiving and perhaps inaccurate, since most surveys don’t include Indie and self-published books, and e-books have surpassed print in the fiction category, especially genre fiction, where Indie and self-published authors now sell more books than authors published by the “Big Five” publishing companies. In fact, Indie and self-published authors make more money overall, and not just because there are more of them each making small amounts: in nearly all income categories, even the $100,000-$250,000-per-year range, there are more Indie authors than Big Five authors.

There are many platforms available for self-publishing, or even true Indie publishing, in print-on-demand format, and even more for e-book publishing. Amazon’s Kindle and its website have revolutionized sales, and its CreateSpace platform has allowed countless authors to enter the market. Authors who have gone through the agonizing process of being rejected by hundreds of agents and publishing houses are rightfully thankful to Amazon for allowing them finally to emerge into print. That is overwhelmingly why such authors are on Amazon’s side in this controversy.

But will lowering the prices of best-selling authors’ books help Indie and self-published authors? Probably not. To some extent the success of such authors has come from underpricing their works relative to those published by the Big Five. Lowering the prices of the latter’s works will only make them more competitive with Indie and self-published authors’ books.

As readers, we can all hope that book prices come down. Lowering e-book prices further than they are now may hasten the end of print publishing, at least for fiction titles, whose print prices are now out of reach for many readers. From most current readers’ and authors’ points of view this would not be good, but who knows what a younger generation of readers, raised only on digital media, will want or care about? Bookstore owners may soon be in the antique business.

Authors don’t lose when readers choose e-format over print. E-book royalties can be, and usually are, higher than print royalties, and sales numbers can be larger.

So for all but a few, the lowering of e-book prices is a good thing. Those of us who love print books and roaming through bookstores and libraries filled with real books can bemoan the ascendance of the e-book, but like luxury cars, fine wines, craft beers, and movie theaters (and now even drive-in theaters), something that is neither cheap nor strictly functional can survive if there is a market for it in our society. And as in each of these examples, prices that began high gradually came down as the audience base grew.

Even if lower e-book prices are a good thing, there is no reason to force the Big Five to lower prices for their best-selling authors; market trends are going to force them to do that anyway. And draconian tactics to make the Big Five kowtow to Amazon’s demands, especially tactics that penalize both authors and readers, are shameful and help no one.



A Distant View of the Middle East




Does the current chaos in the Middle East (Syria, Lebanon, Iraq, Egypt, Libya) represent the waning of American influence in the world, or perhaps opportunities missed by a U.S. administration that has refused to take a leadership role in conflicts beyond its borders, or simply era-specific developments unique to a part of the world struggling with the transition from autocratic to democratic rule? Or perhaps it represents none of the above—or a little of each. Certainly events in that part of the world are confusing to us. The civil war in Syria may be the most confusing. Bashar al-Assad, whose religious affiliation is with the Alawite sect, an offshoot of Shiite Islam, is a tyrant who uses his military to rein in anyone in his country who challenges his policies. His opponents are mostly Sunni Muslims, who appear to have begun as protesters and freedom fighters but many of whom have now become champions of fundamentalist Islam as an alternative to the current regime. They are supported by Saudi Arabia and are allied with al-Qaeda, both within and without the country, and particularly across the border in Iraq. Some of them want to establish an independent fundamentalist Islamic state occupying parts of both countries. The most prominent of these groups, the Islamic State in Iraq and Syria, or ISIS, apparently uses torture and execution techniques on civilians no different from the worst used by the Assad regime it fights.

Within Iraq, the Shiite leadership of Nouri al-Maliki has so limited the voice of minority Sunni Muslims that the Sunni-oriented al-Qaeda in Iraq has resurfaced and taken over part of the country. The U.S. faces the quandary of how much assistance to provide the al-Maliki government to suppress the Sunni uprising, since the uprising is largely fed by fundamentalist al-Qaeda influence.

For the last several years, the United States has labeled Shiite Iran its principal threat in the Middle East. Terrorist activities have been laid at the doorstep of Hezbollah, an Iran-financed Lebanese group of Shiite Muslims whose principal target has been Israel but who are now fighting in Syria in support of the Assad government.

Americans, and particularly the Americans who make foreign policy decisions, have not understood Middle Easterners. A recent study, which found that Tunisians were among the most liberal of Middle Eastern citizenries, second only to Turks, showed that Tunisians viewed themselves as Muslims first and Tunisians second. They favored freedom of expression, but not if it included criticism of their religion (their new constitution, heralded as a model for the Middle East, declares it the state’s duty to “protect the sacred,” referring to Islam). At the same time, they were largely tolerant of other religions. They overwhelmingly favored arranged marriages, and only about half felt that women, whose right to hold office is now guaranteed in the constitution, should be allowed to dress as they wanted. They valued American friendship and wanted American tourism, but favored American defeat in Afghanistan, where they viewed the American effort as an attack on Sunni Muslims. They had stronger faith in their army than in their government. Most of their views were echoed, strongly or weakly, throughout the Middle East.

Into what American political storyline do these findings fit? None. Our views of the Middle East are too simplistic and culture-centric to encompass complexities that we don’t understand but that make perfect sense to the inhabitants of Syria, Tunisia, and Egypt. Americans tend to think in terms of being either for us or against us, of favoring free speech or opposing it, of separating religion and politics, and of human rights that include equal treatment of everyone regardless of gender, sexual orientation, religion, ethnicity, or social status. These are not the terms that determine Middle Eastern sympathies.

Fareed Zakaria, host of CNN’s Fareed Zakaria GPS, has recently argued that it was not Obama’s inactivity that created the disasters of Syria and Iraq but Bush’s activity. Bush virtually took over Iraq with no appreciation of the sectarian divisions within the country. He rescinded the power of the Sunni minority through a policy of de-Baathification, without realizing that he was installing a tyrannical Shiite majority in its place and fueling the fires of a Sunni uprising. In Syria, Obama has been accused by Senators McCain and Graham of having caused the al-Qaeda takeover of the revolution against Assad because we didn’t jump in and support the “friendly” insurgents soon enough. But recent New York Times reports on the attack on the U.S. diplomatic mission in Benghazi, Libya indicate that it was the “friendly” insurgents against Gadhafi, whom we armed, who led the attack against our installation.

To say that the Middle Eastern situation is murky is an understatement. What should the U.S. do in such a case? The answer is simpler than it would appear to be.

First, don’t arm anybody. The evidence that our own weapons almost invariably end up being used against us runs from the Afghan rebels, armed and trained by the CIA, who morphed into al-Qaeda, to the latest exposé of what really happened in Benghazi, where the attackers used weapons that had been supplied to our rebel allies, to recent reports that Boko Haram, which kidnaps innocent schoolgirls, is also using weapons supplied to Libyan rebels. Offering food, water, and medicine is fine, but adding more weapons to this conflict is a definite mistake.

Second, don’t impose Western values on Middle Eastern cultures. As a country that separates church and state to the point of forbidding the singing of Christmas carols on government property, we cannot begin to understand cultures that permit freedom of expression but not criticism of religion, or that put more faith in their military’s ability to run the country than in their elected officials. Go to war to protect women’s rights in a Muslim country? We don’t have a clue what rights are important to Middle Eastern women or how their thinking fits their own country’s culture.

Third, admit we don’t understand the “Arab Spring” (now fall, winter, summer, and on and on) and what it means to those engaged in it. There is no opportunity to hijack it for our own ends—period. We don’t know what it is about, and we’ll have to wait and see. We can pontificate about whether Arab countries are “ready” for democracy, but that’s mostly just Western (and basically White) prejudice: a haughty sense that the “dusky” races (or non-Christian countries) don’t really have the wherewithal to govern themselves democratically. Democracy can take many forms, and ours in the U.S. is just one of them. Equitable governance can take many forms too, and we’re not even sure that we know how to achieve it in the U.S. So what happens if we heed all of this advice and keep our distance?


We don’t know.






Income Inequality

Casey Dorman, Editor-in-Chief, Lost Coast Review


There seems no doubt that income inequality is increasing in the United States. Paradoxically, global income inequality is decreasing, although it certainly persists, as Pope Francis recently pointed out. As Charles Kenny reported in Bloomberg Businessweek last December, “worldwide, consumption for the median inhabitant has increased about 80 percent [over the last ten-year period], compared to around a 60 percent increase for the world’s highest-spending 1 percent.” So worldwide, incomes have become more, not less, equal. This is primarily due to remarkable gains for the middle class in China and India, whose economies have taken off in recent years, partly on the strength of consumption by affluent nations, such as the U.S., that value those countries’ cheap products. Some have seen this as a validation of the “trickle-down” theory of economics, since the spending of the rich has raised the incomes of the poor.

Despite these worldwide gains, income disparity has increased in the U.S., which now has, as measured by either the Gini coefficient or the Palma ratio, the largest gap between rich and poor of any highly developed country. Three questions arise from this fact: 1) Why is this so? 2) Is it bad? 3) If it is bad, what can be done about it?

There are multiple reasons that income inequality has increased in the U.S. A 2008 study by the National Bureau of Economic Research found that they include the failure of low-earning women’s salaries to rise—salaries more likely than men’s to sit at minimum-wage levels. Other reasons were health issues and shorter life expectancy among the poor compared to the wealthy. Increased immigration of low-wage earners lowered the wages of high-school dropouts and of earlier low-earning immigrants, though it did not affect native-born high school graduates. At the top end of the wage scale, the salaries of the top 10% of earners, and particularly the top 5%—CEOs, sports and media superstars, and some professionals—have soared over the last 40 years, far outstripping the salary increases seen in the bottom 90%.

So in the United States the rich have gotten richer while those in the middle have stayed the same and those at the bottom have gotten relatively poorer. What’s wrong with that? If those at the bottom were still able to live comfortably, maybe nothing. But they are not. A forty-hour week at $7.25/hr., the current federal minimum wage, equals $290 per week, or about $1,257 per month before taxes. And many of the poor are not employed full time. In Orange County, California, where I live, the average rent is $1,671 per month. California’s minimum wage, at $8.00/hr., is higher than the federal one, but that still translates into only about $1,387 a month, nearly $300 less than the average rent. We have government programs to assist people at such low incomes, but before the Affordable Care Act, such a salary would not have qualified a family of two for Medicaid; they would have received an earned income credit of only $140; and they would not have qualified for food stamps. These things would change if one of the two were a child, but then there would also be child-care expenses. If, for instance, one of the two were a child with child-care expenses of $450 per month and the other his or her mother, the two would be eligible for approximately $320 in food stamp benefits per month (which may be reduced by $90 in the near future if Congress adds such a reduction to the Farm Bill), and the child, but not the mother, might qualify for some kind of health care or discounted health care (after the Affordable Care Act both mother and child would qualify for public health care). It is difficult to see how such a single mother and her child could survive without sharing an apartment or living with a relative, either of which would lower their eligibility for earned income credits or food stamps.
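For readers who want to check the wage arithmetic, here is a minimal sketch. The hourly wages and the average rent are the figures cited above; the 40-hour week and the conversion of weekly pay to monthly pay via 52 paid weeks spread over 12 months are our assumptions.

```python
# A quick check of the minimum-wage arithmetic discussed above.
# Assumptions: a 40-hour work week, 52 paid weeks a year spread
# over 12 months; all figures are gross (before taxes).

def monthly_income(hourly_wage, hours_per_week=40):
    """Gross monthly income from an hourly wage."""
    return hourly_wage * hours_per_week * 52 / 12

federal = monthly_income(7.25)     # federal minimum wage
california = monthly_income(8.00)  # California minimum wage at the time
rent = 1671                        # average Orange County rent cited above

print(f"Federal:    ${federal:,.2f}/month")            # about $1,256.67
print(f"California: ${california:,.2f}/month")         # about $1,386.67
print(f"Shortfall vs. rent: ${rent - california:,.2f}")  # about $284.33
```

Even at the higher California wage, a full-time month of work falls several hundred dollars short of the average local rent before a single other expense is paid.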

So the plight of the poor in America is bad. What can be done about it? A higher minimum wage would help. California is scheduled to raise its minimum wage to $9.00/hr. this year and to $10.00/hr. next year, and there are proposals in Congress to raise the federal minimum wage as well. David Brooks has recently argued that increasing the minimum wage would help only a minority of families in poverty, because most minimum-wage earners are third or fourth members of families already above the poverty line. He cites one study in support of his position, but other studies have challenged that finding. In fact, the Economic Policy Institute has recently shown that over half of those earning a minimum wage come from families with a total income of less than $40,000 per year—200% of the federal poverty level for a family of three; not poverty level, but certainly in the lower income range. Raising the income ceiling for food stamps and extending Medicaid to a larger group with higher incomes would also help. These are all government interventions. Republicans oppose such measures and prefer to reduce taxes on those who pay the most taxes, i.e., the wealthy, in the belief that they will spend and invest more, growing the economy so that more people have better-paying jobs.

A greater number of better-paying jobs will help the middle class, who have the education and training to take advantage of them, although middle-class wages have not increased, relative to inflation, for the past forty years while high incomes have. But those at the bottom of the work ladder, in low-paying, minimum-wage jobs, are surely not going to profit from the increasing wealth of the income elite. Many of the low-paying jobs in our country are part time, because employers don’t want to pay the benefits they would owe full-time workers. Many fast-food and retail chains follow this practice, which they believe increases profits by decreasing costs. (Walmart recently found that this doesn’t work so well: after a well-publicized reduction of workers to part-time status to avoid the Affordable Care Act’s health insurance requirements, it brought 35,000 workers back to full-time status because of the inefficiency the reduction caused in its store operations.)

A minimum wage should be a livable wage, which at present it is not. Additionally, since life at the bottom of the wage scale is never going to be very good, we need to change our society in such a way that those who are born into the bottom socioeconomic rung have a way to advance above that status. The answer is better education, affordable health care, and training for technical jobs that can be carried out by well-trained persons whose academic, as opposed to technical, education need not be above a high school level. Charles Kenny has pointed out that some of the upward movement of the poor in third-world countries has been due to a focus upon education. “Primary school enrollments in Sub-Saharan Africa have risen from 70 percent in 1990 to 100 percent today. Secondary enrollment in the region has climbed from 22 percent to 41 percent over the past 21 years.” But the U.S. education system is woefully inadequate, especially for those who live in poor communities where the tax base is weak and the cultural context makes going to school a dangerous and often seemingly pointless endeavor.

The working poor also need political power. A Cesar Chavez figure is needed to champion the low-paid fast-food and retail workers, someone with the charisma and legitimacy to lead a nationwide boycott or some such action that will bring the giants of the food and retail industry to the negotiating table to agree on higher, livable wages. In New York City, the new mayor, Bill de Blasio, has labeled the wage practices of fast-food chains in the city “an unsupportable situation” and has called for city action to require an increase in wages.

A newly published study by researchers at the University of California, Berkeley and the University of Illinois at Urbana-Champaign found that: “More than half (52 percent) of the families of front-line fast-food workers are enrolled in one or more public programs, compared to 25 percent of the workforce as a whole. The cost of public assistance to families of workers in the fast-food industry is nearly $7 billion per year. At an average of $3.9 billion per year, spending on Medicaid and the Children’s Health Insurance Program (CHIP) accounts for more than half of these costs. Due to low earnings, fast-food workers’ families also receive an annual average of $1.04 billion in food stamp benefits and $1.91 billion in Earned Income Tax Credit payments. People working in fast-food jobs are more likely to live in or near poverty. One in five families with a member holding a fast-food job has an income below the poverty line, and 43 percent have an income two times the federal poverty level or less. Even full-time hours are not enough to compensate for low wages. The families of more than half of the fast-food workers employed 40 or more hours per week are enrolled in public assistance programs.” As Forbes Magazine noted, all these things were true while the ten largest players in the fast-food industry “made a cumulative $7.4 billion in profits in 2012, paying out an additional $7.7 billion in dividends and buybacks to shareholders.”

The rich are getting richer, but the middle-class, the group to whom all politicians seem to speak, is doing OK. It is the poor in America who are falling further and further behind. There is no indication that wealthy or corporate America has any inclination or plan to fix this problem. The burden falls upon the rest of us to support government programs that increase wages and to end our support of greedy businesses that fail to pay their workers enough money to live on.







When Democracy Stops Working


The recent circus in Washington, D.C. over the Continuing Budget Resolution and the Debt Ceiling has served to highlight the inability of our elected politicians, from the president down to the lowliest freshman congressman, to solve the country’s political problems.


I admit that the fight should never have happened. In the first place, Congress should have worked out budget agreements suitable to both parties long ago. The blame goes to both parties: Democrats in the Senate failed to pass a budget for four years, and when they did, Republicans refused to continue funding the government as the budget expired unless Obamacare was defunded or delayed. Talks that should have occurred over a new budget did not take place. Secondly, there should be no “debt ceiling.” We have to increase our debt limit so long as we are still servicing a large debt and have not balanced the federal budget. Other countries in similar situations simply keep borrowing more money and debate over the budget, not over a ceiling on their debt (admittedly, some of our European friends have gotten in over their heads in debt).


OK, so the debates should never have occurred. But they did. And neither side has distinguished itself as either smart or noble in the show in Washington. Both sides have asserted their point and then refused to move beyond it prior to any discussion with the other side.


The result has been a total breakdown of governmental function. Politicians are supposed to be smart negotiators… aren’t they? Watch Steven Spielberg’s Lincoln and see how Honest Abe did it (how many of us hoped that Barack Obama would watch this movie and learn something?). Make Lyndon Johnson your modern role model. Ask Bill Clinton how he dealt with Newt Gingrich. Or vice versa.


We elect our representatives to work with each other and make our government function. If they do not do this, they are not putting any of our wishes into effect, unless we are anarchists and don’t want our government to function at all.


Again, we must remind ourselves that everyone in Washington is at fault here. No one has extended a hand to the other side. I can’t help but think that the way each side is approaching the issue of the budget in Washington is the way America is approaching its role in world politics—do whatever we want and refuse to talk about it with anyone else. This is an ominous sign of American hubris. The lesson here is that if you take a stance based upon absolute certainty that you are right, and refuse to negotiate because of that certainty, you risk becoming estranged both from the views of those to whom you, in theory, report (i.e., the citizens) and from reality.


Many of us feel certainty. But because we are all human and subject to our native and environmentally produced biases, our certainty is no guarantee that we are right. We are a social species, and despite Alexander the Great, Genghis Khan, and other great figures in history, real progress of our species and culture has been the product of social structures that supported wholesale cultural movement because of a shared view of the goals of our civilization, not the implementation of the narrow vision of a single person or a small minority. We need to develop such a shared view of what our American civilization is supposed to look like. Our politicians have an obligation both to lead us and to talk to each other in furthering such a shared view.


The Editor





The Creation of Culture


On This Week with George Stephanopoulos, conservative columnist George Will criticized the suggestion that state or federal investment in Detroit could pull the city out of its economic crisis, claiming that federal assistance, in his words, “[c]an’t solve the problems, because their problems are cultural.”

According to Will, “You have a city, 139 square miles, you can graze cattle in vast portions of it, dangerous herds of feral dogs roam in there. Three percent of fourth graders reading at the national math (sic) standards, 47 percent of Detroit residents are functionally illiterate, 79 percent of Detroit children are born to unmarried mothers. They don’t have a fiscal problem, they have a cultural collapse.”

Shrinking from a population of around 2 million in the mid-1960s to barely over 700,000 currently, 83% of whom are African-American, Detroit has dropped from the United States’ fourth largest city to its eighteenth. Only two auto plants remain in the city. Detroit’s median household income of $27,000 is approximately half that of the state of Michigan as a whole, which itself ranks 34th among the states. The jobless rate is close to 30% and the poverty rate is nearly 40%. A quarter of the population never graduated from high school and only 12% have college degrees. The police force of 2,700 officers is down from nearly 4,000 ten years ago. The rate of violent crime in Detroit is the highest in the country among large cities.

The arguments will continue as to whether Detroit represents the future of the American city, or at least the American industrial city (Flint, Pittsburgh, Buffalo, Toledo, Chicago, Milwaukee, Akron) of the old Rust Belt. At the least it represents a symptom of the woes of big American cities from which manufacturing has departed—not just the jobs, but the all-important taxes from the employers—and from which affluent residents, and their property taxes, have fled to the suburbs.

George Will is seeking to affix blame for Detroit’s woes, and he clearly blames its citizens. That seems to me to be blaming the victims. I’m not sure who is to blame for Detroit’s circumstances, although fingers have been pointed in enough directions (politicians, the auto industry, city workers and pensioners) that the fault is surely multi-determined. One thing is sure, however: Detroit represents a cultural failure.

A recent article in Boston Review by Jess Row examined the focus of many white writers in America (Richard Ford being a prime example) on seeking a safe, suburban or exurban community where matters of identity, conscience, existence, etc. could be examined against a backdrop of what for those writers is traditional (White) American culture. The plight or even existence of a Black or Hispanic urban population has been studiously ignored by these writers… as it has by the rest of White America.

The cultural problem we live with in this country is a division between rich and poor, which often coincides with a division between White and Black or Brown. Those who are not doing well in our society, those who are undereducated, un- or underemployed, those who are not safe in their neighborhoods, and those who require government assistance to have enough to eat, are unrepresented and uncared for. Not only do the country’s intellectuals ignore them, but so do the politicians, who rant about the “middle class” but ignore the truly poor; who preach the dire consequences of the country borrowing more money than it can easily repay but ignore those who haven’t got enough money to live; who proclaim “family values” but ignore social circumstances that destroy families; who serve the super-rich—the group that has increased its relative income by nearly 20% while the rest of America has remained stagnant or drifted backwards—and who are more concerned with abortion issues, the profits of health insurance companies, and the defeat of fundamentalist Islam than with the fabric of American culture… all of America and all of its culture.

George Will is right that Detroit represents a failure of culture, but it is the culture of which he is a part that has failed, not the culture of the victims of that failure. (For a glimpse of the culture which exists in the Detroit area—actually Flint, MI—read Randall Mawer’s review of Gordon Young’s Teardown in this issue of Lost Coast Review.)

The Editor







Mental Health Recovery and the Arts

By Richard Krzyzanowski

Most people will seize an opportunity to talk about themselves, as we tend to be our own favorite subjects. Of course, we all tend to edit our material to some extent, highlighting the flattering and omitting that which we feel others may not approve of or understand. Many of us – somewhere between one fifth and one quarter of us – are reluctant to open that door at all, however, because of where the process of disclosure might lead: We are people with a personal experience of mental health challenges, a subject so touchy that often we not only avoid discussing our lives with others, but sometimes hesitate even to think about it ourselves.

This is the face of stigma, whether public stigma or its internalized brother, self-stigma, and the damage it can do is very real, from lost opportunities in society to lost self-esteem to the radical attempt to escape this painful situation entirely through suicide.

The most productive strategies used to fight stigma and its impact on lives involve using the truth to give perspective: We are not defined by our diagnoses or our illnesses; mental health issues touch most of us at some point in our lives, either directly or through someone we know and may care about; there is life after and with such challenges, which we call “recovery,” a powerful concept precisely because we had all been told for years that it was not possible.

Once such doors are opened, we now have the room to get on with life -- even to introduce a celebration of those parts of us that have nothing to do with our “conditions.” Engaging our creativity through the arts has played an especially significant role in helping people find a positive identity in society and for ourselves.

Society honors and venerates the artist, whatever their art or craft -- visual, dramatic, musical, literary, etc. When a person who manages a mental health issue is transformed into an artist, the mental health side of things gets cut down to size, in the eyes of others as well as in the mirror, and the result can be wonderfully liberating and exhilarating. Certainly, the mental health issues don’t go away, but they become appropriately contextualized as what they should be: Just another of the challenges that life presents to us all in one way or another, and just another facet in the complex internal diversity that uniquely defines us all.


Richard Krzyzanowski is a former career journalist who came to the mental health field more than ten years ago, following his own experience of a mental health crisis. He currently works as a trainer and organizer—primarily within the mental health "client" community—and is well-known for his advocacy as a member of several state-level committees, including his role as chairman of the California Client Action Workgroup.







The Black President’s Burden

One of the most insightful—and devastating—descriptions of the Western World’s influence on Africa is Walter Rodney’s 1972 How Europe Underdeveloped Africa. Rodney’s searing accusations against Western governmental and corporate greed, and their role in keeping the resource-rich continent of Africa and its people firmly stuck in the third world, according to Nigel Westmaas, “led to a veritable revolution in the teaching of African history in the universities and schools in Africa, the Caribbean and North America.”

But whether or not Rodney’s treatise changed the teaching of African history, it seems to have had little or no impact upon the policies of the West. No knowledgeable person believes that the Africa described by Rodney ended when the African nations gained independence in the sixties, nor that, today, Western countries are not still using Africa for their own ends while giving no heed to the welfare of the African people.

Both Europe and the US stood by during virtually all of the major non-Arab African upheavals of the last two decades, including genocidal wars in Rwanda, Darfur, and Congo, at the same time that NATO was intervening directly in Kosovo and Libya to protect civilians from slaughter. After suffering catastrophic losses during relief operations in Somalia in 1993, the US appeared to adopt a hands-off policy with regard to African conflicts.

With new fundamentalist Islamic threats in Mali as well as other areas of sub-Saharan Africa, the US is now beefing up its military presence on the continent with advisors and technology such as drone aircraft. France has troops on the ground in Mali. The US instituted an official Africa Command, AFRICOM, one of six such commands worldwide, in 2007. Recently it was revealed that the US will station drone aircraft in Niger and already has several other drone bases up and running in Africa.

What this means is that American foreign policy toward Africa, having been weak on the economic and humanitarian sides in the first place, is becoming increasingly a military policy because of the threat of Al Qaeda—a development that failed to materialize in response to millions of Africans being killed in genocidal wars only a few years ago. What is the difference? When Africans were killing Africans, the outcome was deemed not vital to US interests. But Al Qaeda poses a different threat. Unlike the various factions involved in the genocidal wars of the past, Al Qaeda is hostile to the West and to the US in particular. If it overthrew some of the African governments, we would lose our access to vital resources.

In a recent interview, Emira Woods, co-director of Foreign Policy in Focus, a Washington think tank, pointed this out. Speaking of the militarization of US African policy she said, “It coincides with Africa increasing its significance as a supplier of oil to the United States. AFRICOM stood up in October 2008 just as Africa was actually reaching … 25 percent of the oil input that comes to the US from [abroad] so Africa was increasing its significance not only for oil [and] natural gas but for other vital resources so we have seen the steady increase in the militarization.”

It is a shame that our policies toward Africa are still being dictated by the same underlying agenda that prevailed during the colonial era. The African continent has contributed both natural and human resources to fuel the economies of the West for centuries. The US record with regard to using slave labor before the Civil War and with giant American corporations such as Firestone, which used Liberian rubber, Unilever, which made soap and other products from African raw materials, and Shell, which extracted oil from the colonial period until now, should make us, as a country, particularly eager to help struggling African nations develop, not just become military bases to protect our interests.

We have a President who is half African, whose father was born and lived in Kenya. Yet, Barack Obama has shown no inclination to pursue a new or enlightened policy toward Africa and, in fact, appears committed to heading down the road of further militarization of our African policy. Instead, he should be reexamining that policy to make it benefit Africans, not just America.

The Editor




French Thought?


OK. I’m all geared up for my trip to Europe. I envision myself sitting by the sea in Barcelona, eating tapas and drinking wine, maybe winding through the labyrinthine streets of Lisbon, or stumbling over the crumbling ruins of Rome, perhaps picking my way along the cobblestone streets of Salzburg. But most of all I see myself sitting at a sidewalk table at Les Deux Magots on Paris’ Left Bank and soaking in the ambience of European consciousness, and particularly that which is most European of all, French thought.

To be truthful, my visions of Paris are of Hemingway writing with a pad and pencil in a café, a beer in front of him and critical eyes directed toward the passing pedestrians, hoping not to be interrupted by one of the silly American or English expatriates who only want to gossip. Not that Papa was above a little gossip himself. At other times, I imagine Sartre, drinking coffee or wine, reading and taking notes, hopeful that an adoring young Sorbonne student will recognize him and distract him from his heavy intellectual exercises and burdensome sense of responsibility, then bending over his tablet and writing a trenchant line or two, directing the rest of us to be free but not free of guilt.

So in preparation for my vigils at the sidewalk cafes of St-Germain-des-Prés, I decide I’d better bone up on the latest French thought, which means first refreshing myself on early twentieth century French philosophy and then tackling that which in the last twenty or thirty years has become all the rage. The French debate this stuff the way Americans talk about sports, don’t they? How can I hold up my head at Café de Flore and be ignorant of Foucault, Derrida, and Lyotard and now Marion, Nancy and Laruelle?

I’m no philosopher, but I’ve read my share of Wittgenstein, Ryle, Searle, Fodor and Dennett, not to mention Anscombe and Foot. As a self-styled empiricist, I cut my teeth on analytic philosophy and loved its close ties to science. Being something of a cognitive scientist myself, I could see the value of philosophy in clarifying my ideas and provoking the pertinent questions. And guys like Fodor and Dennett have a breezy sense of humor, making reading them a pleasure.

But what’s going on with the French? Other than Foucault and, refreshingly, Malabou, their philosophy doesn’t seem to refer to anything. It curls back upon itself—analyzes its own language and that of its own incestuous practitioners. Did positivism stop at the shores of France? Wasn’t Comte a Frenchman? Why do they still criticize ideas I thought no one took seriously (e.g., classical Marx and Freud) and have an obsession with the transcendent and the ideal (spending thousands of words dismissing what I would have thought they never should have considered in the first place)? And why couldn’t the Ordinary Language movement have made at least some inroads into French intellectual culture, at least to the extent that it would deal with language as it is used to communicate? They only harp on language as form, on reality as language. They only talk to each other. At least they have a social and political conscience, or appear to.

I blame it all on phenomenology.

The Editor






The Wild West


When will Americans finally become fed up with mass killings? In recent years we have had the Virginia Tech murders, the Gabby Giffords shooting in Arizona, the Batman movie killings in Denver, and the shootings at the Sikh temple in Wisconsin. In addition to these well-known incidents, there have been, in the last ten years, 18 other mass shootings in the United States, yielding a death toll of 126 people. The perpetrators of the incidents at Virginia Tech, in Arizona, and in Denver clearly had mental problems. No doubt many of the other gunmen did also. Curiously, these events have raised more of a cry for improved treatment of mental illness than for increased gun control.


The US has no more mental illness than the rest of the world, but it does have more guns per person than any other country in the world.* Opponents of gun control have argued that mass killings occur nearly as often in Europe as in the United States, citing several school shootings in Germany and the tragedy in Norway in which a lone gunman (who was declared sane) killed 77 people as evidence that gun control does not reduce killing. However, these European incidents merely show that strict gun control does not eliminate mass killings. Isolated incidents, either in Europe or in the United States, do not give an accurate picture of gun-related deaths. The U.S. still leads all European countries except Estonia in gun-related homicide rates and is 28th in the world, behind a number of South American, Caribbean, and African countries, in this category.** (The oft-cited statistic that the US has the highest rate of gun-related deaths in the world is simply inaccurate. However, the rates of gun-related deaths and gun-related homicides in the US are the highest of any wealthy, developed country.)


Would stricter gun control laws reduce the incidence of gun-related deaths in the U.S.? The data bearing upon this issue are suggestive but not conclusive. Across all fifty of the United States there is a significant inverse relationship between restrictive gun laws, such as a ban on assault weapons or requirements for trigger locks and safe storage of firearms, and rates of gun-related deaths.*** There is also a robust positive relationship across states between gun ownership and gun-related homicide rates.**** A well-known study found that jurisdictions with “right to carry” concealed weapons laws had lower crime rates than other jurisdictions.***** However, other large-scale statistical analyses, comparing US counties that had passed laws allowing the carrying of concealed weapons with those that had not, failed to show any effect of such laws on gun-related homicides or crime rates in general.****


All of the above data are correlational. Higher education of a state’s citizens, liberal orientation of its electorate, and greater wealth (as well as a higher percentage of immigrants!) are all positively correlated with low gun-related death rates in the United States, and it may be that those characteristics are what lead voters to pass stricter gun control laws, while the laws themselves have no direct causal effect upon death rates. Violent crime rates, both across the United States and in many of the states that passed right-to-carry laws, were already decreasing when such laws were passed, and further decreases may simply have mirrored a national trend.


So if we can’t say anything definitive about the causal influence of stricter gun control on gun-related death rates, what can we say about the need for stricter gun control? More importantly, what suggestions can we make for reducing the rate of gun-related deaths in the United States? Well, people can’t use guns to kill people if they don’t have access to guns, so restricting access to guns ought to reduce gun-related deaths. The problem is how to do it. Mexico is an example of a country in which strict gun sale laws exist alongside outrageous rates of gun ownership (mostly via weapons smuggled in from the US). So strict gun sale laws need to exist alongside strict enforcement of those laws. This would include both a total ban on ownership of assault weapons and (as many countries have in place) a requirement to demonstrate a need to own a gun that does not include self-defense, as well as strict background checks on those who buy guns for legitimate reasons such as target practice or hunting.


Probably as much as by gun proliferation, the US is plagued by a culture of violence, which celebrates violence as a method of solving problems. We equate our country’s power and prestige with the strength of its armed forces rather than with its intellectual or creative achievements. Our media are saturated with images of violence in films, books, and music. When this veneration of violence is combined with the highest rate of gun possession in the world and a stubborn resistance to curbing gun possession, rooted in a belief that it is one of our inalienable rights, we find ourselves exactly where we are today—the most violent of the world’s well-developed countries.


The Editor






*****John R. Lott and David B. Mustard, “Crime, Deterrence, and Right-to-Carry Concealed Handguns,” The Journal of Legal Studies, 26 (1997).



Commentary: In Memoriam - Lawrence Howard; Who Wrote Shakespeare's Plays?

In Memoriam

Lawrence Howard


I first met Lawrence Howard when he was a psychology post-doctoral intern with the county mental health department. I was a neuropsychologist teaching a course in testing to county employees and interns, and he was a psychologist trained in cognitive science who had decided to become a clinician. Two things struck me about Dr. Howard at that time: he was brilliant and knowledgeable, and he was in a wheelchair, paraplegic from a childhood illness. Later, when I was forming a teaching service within the county mental health department, I hired Dr. Howard (I always called him “Howard,” although others called him “Larry”) as a half-time teacher. The other half of his time was spent as a professor of cognitive science at the University of California, Irvine.


Howard and I became both colleagues and friends. When he later moved into a clinical position for the county, working with children, I kept up sporadic contact with him, mostly through trading videos or having coffee together. He had a great love of film. Anyone who frequented the University Cinema across the street from UCI would have run into him often.


After he and I both retired from county service, I often ran into Howard at the coffee shop across from the university, and he and I talked about many things, but mostly politics (he was a liberal who raved even more than I did and who had edited a book on terrorism) and films. I convinced him to review films for Lost Coast Review, and previous issues have been fortunate to include a few of his reviews. When I last had coffee with him, Howard promised me another review for this issue. Then he failed to show up at the coffee shop for a couple of weeks and didn’t answer his phone. I found out through an announcement from the UCI Disability Services Department, where he had taken part-time employment, that he had died.


Howard taught many university students, he mentored interns in our county internship program in psychology and he counseled many clients. He was a gentle, intelligent, interested, caring human being and he will be missed by many whose lives he touched.

Casey Dorman


Who wrote Shakespeare’s Plays?

Shakespeare and DeVere

Some of us wonder about the fate of D.B. Cooper. Some of us continue to question whether Lee Harvey Oswald and Sirhan Sirhan were really lone assassins. But perhaps the greatest mystery of our time is the question of who wrote the plays and poems attributed to William Shakespeare. Unless informed by films such as Anonymous, most of the public may not even be aware of the controversy concerning Shakespeare’s identity. In a nutshell, the William Shakespeare (whose name was most often spelled Shakspere) who was born and died in Stratford-upon-Avon never attended a university, may not even have attended grammar school, never claimed to have written any poems or plays, wrote no known letters, scrawled his signature as though he were either illiterate or debilitated, and left no books or copies of his own works when he died. Seizing upon these oddities, as well as upon the lacuna of information about the man, several scholars and amateur sleuths have proposed that Shakespeare’s great works were actually the creation of someone else. The prime candidates have been the philosopher Francis Bacon and the poet and courtier Edward de Vere, the 17th Earl of Oxford. Both of these men lived lives which included the learning, the travel, and the talent to produce great works of literature.


The purpose of this discussion is not to take sides in the Shakespeare controversy. I cannot deny that the debate is an interesting one, and, purely out of curiosity, I have immersed myself in the arguments from both sides. What impresses me about these debates has been the exaggerations, the factual omissions, the obfuscations, and the blatant distortions promulgated by both sides as they have made their arguments. Furthermore, the degree to which such misstatements enter the polemic in which any particular author engages is not at all mitigated by the writer’s scholarly credentials.


In one of the most entertaining and controversial books on the subject, Charlton Ogburn’s (1984) The Mysterious William Shakespeare: The Myth and the Reality, the author refers to both sides as made up of “true believers,” a label he disparages but which patently applies to himself as much as to anyone else to whom he refers. Although I have been entertained, when I began looking for some evidence to resolve the question my own reaction turned to dismay, for I was hoping to find someone who exercised skepticism when evaluating both sides of the question, someone who used scholarly restraint when invoking words such as “probably,” “no doubt,” “without question,” or “most certainly” in his or her arguments, rather than treating such tentative assertions as fact. But alas, nary an evenhanded treatment of the subject appears to have been written. Both sides argue from tenuous foundations, claim absolute certainty, and revile the assertions of their opponents with disdain bordering on insult.


Why is this controversy, which in any case occupies the attention of only a minority of thinking people, important? Not because of the position of Shakespeare within the canon of English literature; as several authors have suggested, the body of work itself is surely monumental enough to stand on its own without additional biographical information, although to finally know more about the playwright, or to find, for instance, that he was a constant collaborator or that the plays were actually written by several different authors, would change our view of the character of human literary genius. To me, what makes the issue important is its ability to demonstrate the fallibility of human reasoning, even by some of our race's finest minds, when those minds feel they have a stake in the outcome of the question. Although mainstream academia mostly rejects the identity question, a few academicians have deigned to participate, mostly to denigrate both the motives and the qualifications of those who have raised the issue. Yet those who have weighed in on either side of the debate are men and women who have often devoted years, occasionally even lifetimes, to scholarly study of the topic, as professors, actors, critics, or writers, and almost all as devotees of Shakespeare's works. The influence of such lofty exercise of the mind, such earnest devotion to scholarship and, in some cases, to the values of academia, which one would assume place truth above almost everything else, appears to carry little weight; it routinely falls victim to bias and personal preference to an extent that ought to shake everyone's faith in the wisdom of human expertise.


The entire enterprise of challenging or defending Shakespeare’s identity is yet another example of human reason being trumped by that most pernicious of ways of thinking we refer to as “belief.” If some of the world’s most brilliant minds cannot maintain objectivity on as presumably dry a subject as Shakespeare, how can we put any faith in the ratiocinations of our so-called experts on such things as global warming, terror threats, Iran’s nuclear ambitions, economic policy, or any of the other issues that so quickly become the substance of our foreign and domestic debates? Is it any wonder that we misjudged Iraq’s threat to us, or that the CEO of J.P. Morgan Chase made such a drastic error with regard to fiscal risk management, or that Europeans can’t figure out how to solve their debt crises, or that it’s nearly impossible to find out the truth about the advantages or the risks involved in adopting Obama’s health care reforms?


Is it really as difficult to be cognizant of the truth when defending one's position as it appears to be? I have not given examples from cognitive psychology, which generally provides little evidence that people use logic or reason to come to decisions (except perhaps in solving puzzles or doing mathematics), especially with regard to real-world problems. But I am struck by how clearly the debate about Shakespeare's identity gives one more example of how easy it is for us to abandon our adherence to truth and logic when it comes to something in which we "believe." It is no easy task to be objective, to examine both sides of an issue equally, to weigh evidence rather than opinion. It flies in the face of human nature. But without doing so, our debates become silly and our conclusions, at best, weak and, at worst, dangerous.


The Editor




Gearing Up for Battle


In a recent New York Times article, Scott Shane discussed the length and cost of the American wars in Iraq and Afghanistan, concluding that, "The outcomes seem disappointing and uncertain." Then he raised the question, "So why is there already a new whiff of gunpowder in the air?" A Pew Research Center poll in February found that 58% of Americans favored the use of military force to prevent Iran from developing nuclear weapons.


Shane's article goes on to examine the inconclusive pronouncements of the IAEA regarding whether Iran's nuclear ambitions are military, the similarly tentative stance taken by the American Director of National Intelligence, the cautionary words of both Barack Obama and the Chairman of the Joint Chiefs of Staff, the provocative words and behavior of Israel, and the competition among the Republican presidential candidates to be the toughest opponent of Iran and the friendliest toward Israel. His analysis conveys the current situation accurately, but it fails to explain the readiness of so many Americans to re-enter the world of international military conflict so soon after such dissatisfying battlegrounds as Iraq and Afghanistan.


As an observer of the American scene, I find the answer to Shane's question obvious: America all too easily and too often celebrates war and its warriors. President Obama welcomed the last troops home from Iraq with the words, "Your service belongs to the ages; it was an extraordinary achievement." Come on. We outgunned a third-rate army and fought insurgents to a standoff. No sporting event in America begins without homage to, and often a prayer for, the soldiers fighting the wars overseas. Curiously, when we pray at these events, we routinely invoke the name of a man who advised us to love our enemies and turn the other cheek. Many Americans regard our President's, if not our nation's, finest hour as the one in which a Navy SEAL assault team shot and killed an unarmed man. Huh?


We are faced with a dilemma: while most Americans now agree that neither the Iraq war nor the Afghanistan war was vital to national security, the men who fought in those wars voluntarily risked or gave their lives out of a sincere belief that they were protecting all of us here at home. We want to honor and thank these people, these soldiers, for their service. But in the process we make several mistakes, which, if left uncorrected, will lead us into yet another unnecessary and fruitless conflict.


We do not “owe our freedom” to these soldiers. Our freedom was never in jeopardy, at least not from Saddam Hussein or the Taliban. Each soldier’s personal decision to risk his or her own life in order to protect ours was both brave and noble; his or her engagement in warfare to carry out that decision was not.


It is not always patriotic to fight a war. It may be, but not when the war is costly and unnecessary. Military actions by our own armed forces do not always (perhaps not even usually) promote freedom. As often as not they protect American interests - whether economic, strategic, or political - often at the expense of the freedom of the people who live in the countries in which we fight (e.g. Vietnam, Iraq, Afghanistan).


The American armed forces are not invincible, despite claims that they are "the most powerful fighting force in the history of the world." We lost in Vietnam, won in Grenada, sort of won in Iraq, and fought to a draw in Afghanistan. All of our opponents have been second-rate powers or less. No American politician or media personality is going to admit this. They will all continue to compete to see who can beat the drum of military glory the loudest, uniformly equating their support of our troops with love of country. And the common man will not question them.


We cannot promote irrationality when it comes to celebrating war and expect to make rational decisions about going to war. This is why we will go to war, unnecessarily, again.

The Editor





The Theory of Relativity

Do neutrinos travel faster than the speed of light? Probably not, but the claim by a scientific team at the European Organization for Nuclear Research, known as CERN, has generated not just skepticism but also a fervor to replicate the finding at an independent site. Why the intense interest? Because the premise that nothing can travel faster than the speed of light is a cornerstone of Einstein's special theory of relativity (remember E=mc²? The c is the speed of light). If anything can travel faster than light, then one of the main premises of Einstein's theory, and of the modern conception of the universe, is incorrect.

Apparently neutrinos are notoriously difficult to measure: the experiment tracked neutrinos sent 454 miles, across two countries, from CERN to a detector in Italy, and they appeared to arrive about 60 nanoseconds earlier than light would have. Most scientists regard the result as a measurement error, but that will not stop them from conducting further experiments in case they are wrong. Healthy skepticism following unexpected results is part of science (remember Thomas Kuhn?), but so is the urge to experiment further to test the limits of science. More importantly, the need to revise theories in the face of new evidence is what sets science apart from religion, faith, belief, and old wives' tales, which govern most of human behavior.
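The magnitude of the claimed effect is worth pausing over. A quick back-of-the-envelope sketch (using only the 454-mile distance and 60-nanosecond figure cited above, plus the standard speed-of-light and mile-to-meter constants) shows just how small the anomaly was, and thus how plausibly a subtle timing error could account for it:

```python
# Back-of-the-envelope check of the neutrino numbers cited above:
# a 454-mile baseline and a reported 60-nanosecond lead over light.
MILES_TO_METERS = 1609.344
C = 299_792_458  # speed of light in meters per second

baseline_m = 454 * MILES_TO_METERS     # ~730,642 m
light_time_s = baseline_m / C          # time light needs for the trip (~2.44 ms)
lead_s = 60e-9                         # the reported head start, in seconds

# The 60 ns lead corresponds to only about 18 meters over the whole trip.
lead_m = C * lead_s
# As a fraction of the travel time, it is roughly 2.5 parts in 100,000.
fractional_excess = lead_s / light_time_s

print(f"light travel time: {light_time_s * 1e3:.2f} ms")
print(f"60 ns head start:  {lead_m:.1f} m")
print(f"fractional excess: {fractional_excess:.2e}")
```

In other words, the entire trip takes light about 2.4 milliseconds, and the neutrinos' reported head start amounts to roughly 18 meters, a discrepancy of about 2.5 parts in 100,000, small enough that a clock or cable-length error could produce it.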

"It's just a theory," we say about evolution, global warming, the Big Bang, and so on; half of us dismiss these theories' claims on that basis, and half of us deny their theoretical status and claim them as "fact," "settled science," or what "science has found." Certainty is a matter of predictability within accepted limits of error and a matter of consensus among either our peers or those with the qualifications to know. Certainty is not independent of the minds of the knowers or observers. "Facts" are things of which we are certain; so is what "science has found." It can all be overturned by new evidence. Then we will be certain of something else.

We must live our lives based upon our knowledge of the world, including the physical, social, political and economic “facts” available to us. These facts will change, either within our lifetimes or in the future. All facts do. Everything we base our decisions upon now will turn out to be different or wrong sometime in the future. But we must still make decisions.  Each of us can decide to base these decisions upon what the “best evidence” of modern science has taught us is most likely to be the case (e.g. rationalists), or upon what our belief systems  tell us is true, independent of any scientific evidence (e.g. religionists), or upon our value systems taught to us by our culture and to which we subscribe through personal commitment (e.g. patriots), or upon our personal proclivities without thought beyond our immediate satisfaction (e.g. shoppers), or upon just thinking and doing what everyone else does (e.g. sheep).

Most of us base our decisions upon each of these different paradigms at different times and in different situations. Is one basis for decision making better than another? Yes, I think so. I prefer the scientific, rationalist one. But probably no single basis for making decisions is the most useful one in all situations. In my opinion it is best to be acutely aware of the relativity of the theory (belief, religion, preference) you are using and its tenuous claim to certainty. Then, with such awareness in mind, make your choices and live with the anxiety of knowing that the basis you used for your decision was in some sense arbitrary, in many ways fallible, and in the long run just another choice you made. If you want to know how well this works, see Noel Mawer's review of the fiction of Camus in this issue. Camus got the idea.


How the United States Celebrates World Philosophy Day

World Philosophy Day 2011 will take place on November 17 at UNESCO Headquarters in Paris. During this event, which has occurred annually since 2002, UNESCO will offer philosophers, researchers, teachers, and students, as well as the general public, a wide variety of conferences on subjects such as the equitable sharing of scientific benefits, the philosophical meaning of the political upheaval in the Arab world, the role and place of women philosophers in the exercise of thinking, philosophical practice with children, and philosophy and equal opportunity at school.

Why is philosophy important to UNESCO?

The first clause of the Preamble to UNESCO's Charter states, "Since wars begin in the minds of men, it is in the minds of men that the defenses of peace must be constructed." In the organization's own words, "For UNESCO, philosophy provides the conceptual bases of principles and values on which world peace depends: democracy, human rights, justice, and equality. Philosophy helps consolidate these authentic foundations of peaceful coexistence."

And now the United States, which provides 22% of UNESCO's financial support, has withdrawn funding for the organization. The reason: UNESCO voted, by a large majority, to admit Palestine into its membership.

U.S. State Department spokeswoman Victoria Nuland stated, "Today's vote by the member states of UNESCO to admit Palestine as member is regrettable, premature and undermines our shared goal of a comprehensive just and lasting peace in the Middle East. The United States will refrain from making contributions to UNESCO."

White House spokesman Jay Carney said, "Today's vote distracts us from our shared goal of direct negotiations that results in a secure Israel and an independent Palestine living side by side in peace and security."

In February of this year the United Nations Security Council voted on a resolution that condemned all Israeli settlements in occupied Palestinian territory as illegal, called for Israel and Palestine to follow the Road Map for Peace plan, and called for both parties to continue negotiations to end the Israeli-Palestinian conflict. Over 120 U.N. member states supported the resolution.

Despite votes in favor of the resolution by all 14 other members of the Security Council, the U.S. vetoed the resolution. US Ambassador Susan E. Rice, while defending the U.S. veto, admitted that, "We reject in the strongest terms the legitimacy of continued Israeli settlement activity.... Continued settlement activity violates Israel's international commitments, devastates trust between the parties, and threatens the prospects for peace.…"

The U.S. has now opposed Palestine's admission to the U.N. as well as to UNESCO. This last action by the U.S. is the most devastating because it is accompanied by the withdrawal of financial support for the UN cultural and scientific agency.

As the Jordanian Parliament stated in response to the U.S. announcement, "The US decision ..., was taken to punish UNESCO for the member states' democratic and just voting to grant Palestine what it deserves. Washington's move is strange because the United States tries to convince the world that it is a protector of democracy and freedom."

The actions of the United States in these matters are not the actions of a country that claims to be a defender of freedom and a champion of the democratic process. The U.S. withdrawal of financial support for UNESCO is the action of a petulant bully, driven by fear of alienating its strongest Middle East ally, Israel (which deserves American support, but not in this way), and by fear of angering Israel's supporters here at home, even when it admits that Israel's actions are wrong. The U.S. pronouncements on these issues are hypocritical and disingenuous, and it is time for America to place truth and morality ahead of political and strategic calculations.

The Editor



The Death of Osama Bin Laden

Like most Americans, I feel relief knowing that Osama Bin Laden is dead, because I think it means the long-term risk from terrorism is lessened, now that one group of terrorists, al Qaeda, has lost its symbolic leader. But the fact that we ended up killing him in a military operation means, to me, that we failed to try to communicate with him or his group, or to understand them, and chose instead to meet violence with violence. Despite the relief we may have felt on hearing the news, his death was not something to celebrate. The West has dismissed the beliefs of Islamic fundamentalists as evil or crazy and not to be tolerated by a civilized society. We have not asked ourselves why some people believe as they do. We have ignored Western actions such as placing military bases in their holy lands, backing cruel dictators who suppressed their own people but supported our policies, invading Muslim countries for misguided reasons, killing countless Muslims either as enemy combatants or as "collateral damage," and the great impact of Western culture upon the values and behavior of their young people, all of which might seem to them legitimate reasons to want to attack us. I deplore celebrating anyone's death, especially a deliberate, violent one, and I regard the manner of such a killing as a failure on our part to look for the common humanity in others and to seek a peaceful, civil way of resolving our differences.

It is time that we Americans became more honest in our assessment of world events instead of seeing them only from our own perspective. Pakistan has been critical of the U.S. for violating its sovereignty in the intrusion into its air space and the attack on its soil that resulted in Bin Laden's death. The American reaction has been to justify these actions because of the magnitude of the target and the role that ridding ourselves of al Qaeda's leader plays in our own self-defense. These certainly are legitimate factors, but we also need to take seriously Pakistan's complaints against our actions. If the shoe were on the other foot and the Pakistanis identified an enemy of theirs in the United States, would we stand for their sending in an armed team to kill that enemy on our soil? Would we allow any other country in the world to violate American air space with the intent of mounting a military attack? Of course we wouldn't, but we justify doing those things because we are America and we were wronged by al Qaeda and Bin Laden - and we know that we can brush aside Pakistan's objections.

In Libya, NATO is present to fulfill a United Nations resolution to protect civilians. Our television news channels have reported on the attacks of Gaddafi's military forces on civilian targets while at the same time showing pictures of so-called civilian rebels with tanks, artillery, and often uniforms (ironically, NATO apparently believed its own rhetoric: it accidentally bombed rebel tanks because, as they said, "We didn't know the civilian rebels had tanks"). Both our news channels and our government have cheered the rebels on as they have tried to mount counteroffensives to take Libyan cities, and NATO air power has offered air support for such operations. In the guise of protecting civilians, NATO struck Gaddafi's residence with a missile and killed his youngest son and three of his grandchildren. These actions may be necessary if we want the rebels to win the civil war and remove Gaddafi from power, but let us not say that they are being done to protect civilians when they, in fact, kill civilians, including children. Americans, including our President, are unanimous in saying that the Libyan leader must be removed, but shouldn't we remember that this is someone else's country and someone else's civil war? The United States cannot, in my mind, claim a moral superiority that allows us to meddle willy-nilly in other countries' affairs and remove their leaders because we don't like them. If we are going to continue to do this, and it appears as if we are, then let us at least be honest about what we are doing and not disguise our actions as moral when we really mean that we are simply powerful enough to do what we want to do.


Is One False Story as Good as Another?

Is one false story as good as another? The obvious question is, "good for what?" I have argued vociferously in favor of verifiable ideas and against fanciful ones, particularly religious ideas. My argument has been that I don't like believing in something that requires outright denial of what appear to be obvious facts, and I don't like believing in something that requires a suspension of either logic or critical judgment.

But aren't all so-called facts true only from a particular perspective, and don't many of the beliefs of modern science derive from scientific and mathematical operations that are obscure and inaccessible to most of us? This has probably been true of most of the great discoveries of science for centuries. Newton's assertions about gravity were based upon a mathematics few people of his day could understand and were no more obviously true to the average person than the counter-assertions about ether or about elements that caused attractions between bodies. Copernicus' heliocentric theory was also based upon complex mathematics (some of which turned out to be wrong) and was more counterintuitive than the common man's observation that the earth was stationary while the sun and the stars revolved around it. Even Darwin's theory of evolution was not a story about the origin of species that coincided with the observations of the average person, and his version of natural selection was less obviously true than competing ideas, such as Lamarck's notion of the inheritance of acquired characteristics.

Even now, we can debate the big bang theory of the origin of the universe and champion it over, say, the continuous creation theory without, unless we are astrophysicists, understanding either the mathematics or the science behind either theory. Yet we are comfortable believing that one or the other theory is true and claiming that the biblical creation story is clearly false, when, for most of us, the only evidence we have is the opinion of scientists.

Newton’s theory of gravitation was shown to be only approximately true for a certain range of phenomena when Einstein developed his theory of relativity and Einstein’s theory was then shown to apply satisfactorily only for another limited range of phenomena when Bohr and others developed quantum theory. Sigmund Freud turned the field of human personality and behavior on its head when he developed his psychoanalytic theory of unconscious motivation. Nowadays, Freudian theory is generally taken to be an outmoded, unnecessarily anthropomorphic story about how thoughts and behavior emerge from a person and has been replaced by neuroscience theories of the functioning of systems of brain cells.

It is difficult not to believe that nearly everything we "know" now, in the early decades of the twenty-first century, will turn out to be either wrong or limited in application or perspective sometime in the future. Does it really matter, then, whether we believe in "scientific" explanations of ourselves and the world we live in or "religious" ones?

Most religious beliefs are based upon premises that are difficult to prove scientifically, at least in the world in which we now live. The assumption that there is an all-knowing consciousness directing, observing, or judging everything in the universe is not something that can be proven or disproven. That some aspect of our own consciousness persists after we die is another unprovable and undisprovable belief. Even the personal feeling a particular person has of being "in touch with," "possessed by," or "at one with" a supreme being may be the kind of culturally generated quality of consciousness that, for someone within the culture that generated it, is impossible to doubt or to examine objectively: one cannot stand outside of one's own culturally determined personality to examine oneself, and a second party's observations about such a phenomenon cannot overrule what one experiences first-hand.

Karen Armstrong’s remarkable book, The Case for God, makes it clear that there have been times and places in history when, at least in the view of some leading thinkers, religion and science were not at odds. St. Augustine, for instance, favored a “principle of accommodation,” which asserted that God presented his revelations to humans in the language that fit their understanding at the time. Thus, according to St. Augustine, Biblical stories were not literal descriptions of objective events that could present a challenge to scientific explanations of those same events and new scientific discoveries and explanations of nature presented no threat to religion. Certainly Copernicus, who was highly religious and, in fact, presented his ideas in more or less religious terms, subscribed to this view. To Copernicus, the beauty and symmetry of a heliocentric system of the sun and the planets was a testament to God’s wisdom and glory, not a challenge to it.

Galileo did not share Copernicus' religious view of the heliocentric theory. Nevertheless, he saw no conflict between science and religion in general, or between scientific verification of Copernicus' theory, which he thought he had provided, and Biblical stories. Galileo believed, and argued, that much of the content of the Bible consists of poetic expressions of religious truth, not to be taken as fact in the way that scientific evidence provides fact. As Karen Armstrong points out, such a view was espoused in St. Augustine's principle of accommodation and was acceptable to the Catholic Church until around the 16th century; after that, the Church became more dogmatic and pushed for a literal interpretation of all of the words of the Bible, as well as of accepted assertions of the Church that were based upon Aristotelian philosophy.

Even today, with "Creationism" and fundamentalist religion gaining greater favor, we see religious narratives portrayed as plausible alternatives to scientific narratives (e.g. evolution), rather than as two different ways of expressing ideas, one scientific and one poetic, that cannot be in conflict because they do not belong to the same category of explanation. Taking the words and stories of the Bible literally would be like taking Christina Rossetti's poetic words, "My heart is like a singing bird," literally and being forced to disavow centuries of discoveries about human anatomy. Science provides one kind of narrative about the world and human existence while religion provides another, but they are not equivalent, nor should they compete with one another.

Scientific theories, discoveries, and explanations do not provide a guide to behavior because, even if they can predict the outcomes of different actions, they do not include values that would lead to favoring one outcome over another. Religious beliefs do include values, so one could base one's decisions about how to act upon one's religious beliefs, but not upon one's scientific beliefs.

Because religious beliefs express values, those values may differ between religions. Hindus place a value on not injuring other creatures and generally refrain from killing animals for their meat, as do monks and some followers of Mahayana Buddhism, for similar reasons. Monks of the Theravada sect of Buddhism do consume meat but do not kill the animals themselves. Christians, Jews, and Muslims have no prohibition against killing animals, and the latter two religions have a history of animal sacrifice.

The values inherent in a religion are not always easy to discern, nor do they always determine behavior. A "golden rule" type of dictum, to behave toward others as you would want them to behave toward you, is often cited as occurring in nearly all of the world's religions. It is, however, a dictum not often followed by the members of most religions, or even by their leaders. Many religions, including Buddhism, Hinduism, Christianity, and Judaism, prohibit the killing of human beings, yet followers of these religions have waged wars, mounted terror campaigns, and even sanctioned the legalized killing of criminals, "witches," "blasphemers," and members of other religions, often in the name of their own religion.

So does the story one tells oneself matter? Most likely it does. If what matters are the values one tries to use to guide one's behavior, then religion is a potent source of such values and offers a ready narrative about the person and his or her relationship to others and to a supposed god. I have often wondered whether, when one kills in the name of religion, or ministers to others in the name of religion, the religion is the cause of the behavior or merely an after-the-fact rationalization for it. The answer is probably that it may be either.

If people do not have religious beliefs, what is their source for the values that guide their behavior? Certainly philosophy has offered candidates that rival those put forth by any religion. A good example is Kant's Categorical Imperative: "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." In other words, act in such a way that you would desire all people to act similarly. Because such a rule still leaves room for the person who, say, desires that all people seek to kill members of a certain race or religion, Kant added a second and a third imperative: that one should "act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end," and that "every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends." These three imperatives, taken together, ensure fair and humane treatment for anyone and everyone.

The work of Frans de Waal with chimpanzees demonstrates that some of what we refer to as moral behavior is also evident in primates and may be the basis for human morality. Altruistic behavior, at least directed toward members of one's own family or colony, is not uncommon in chimps, bonobos, and capuchin monkeys, and it involves some capacity for empathy. Such behavior appears to be tied to the emotions in these primates. Now, primates do not have external language of any degree of sophistication and presumably also lack an internal dialogue to guide their behavior. Yet they show some elements of the types of behavior that we associate with being guided by a philosophical or religious stance. Does this mean that human philosophic-religious narratives are rationalizations for ways of behaving that are genetically wired into us when we are born?

In his book Primates and Philosophers: How Morality Evolved, de Waal and his commentators agree that primate "moral" behavior is mostly limited to the circle of family and colony members and does not include a "universal" notion encompassing all members of the species or all living things, which many human philosophies and religions do include. Of course, despite such inclusions, religions in particular have provided a source of antipathy toward other religions, nations, and races, and a marked lack of empathy toward one's enemies, usually described as seeing people who are different from oneself as "less than human." There is often a marked discontinuity between the universal morality espoused in the writings of a religion and the behavior of the people who espouse that religion.

The religious or philosophical narratives we develop and try to live by may have their source in behavioral and emotional tendencies that are part of the genetic endowment of our species. In this sense, they are a sort of grand, after-the-fact rationalization for how we find ourselves feeling and behaving. It may be a limitation of that genetic endowment that such moral tendencies are directed mostly toward those we see as similar to ourselves. We can overcome this tendency only by creating narratives that are more inclusive and universal. So far, in my opinion, neither the world's religions nor the national identities that have developed have succeeded in extending the tendency to act morally to interactions with people seen as unlike oneself. In some cases, both religious and national narratives have fostered suspicion and hatred toward those who do not share one's religion or nationality. It is time to assess whether such insularity is endemic to religion and national identity and, if so, to develop narratives based upon different premises.






Which comes first, consciousness or behavior?

Philosophers from Paul Ricoeur to Daniel Dennett to Owen Flanagan, to name only a few, have posited that the sense we have of ourselves is constructed in narrative form, that is, in the form of a story, with ourselves as the central character. We are, simultaneously, actor, storyteller and listener to this story. As actor, we are aware of the previously occurring events and plot and have a general idea of where we believe the story is going, but at any particular moment we may be reacting to events that had not been considered or had been poorly estimated, and both the events and our reactions to them only become incorporated into the story after the fact.

To me, the nagging question has always been how much of the coherent, story-like construction of our experiences occurs after the fact. I don’t find it difficult to conjecture that our normal way of operating is to construct our conscious experience, and thus our narrative construction of events, after we have already acted. Benjamin Libet’s research, in which conscious decision making actually followed evidence of the brain’s initiation of actions, gives credence to this conjecture. To the extent that our conscious experience follows our behavior, we can see it as a rationalization of that behavior, a fitting of the behavior (probably with some distortions of memory about it) into our coherent narrative.

I can see two objections to the argument that consciousness always follows behavior. One, of course, is the argument I have made before: that we often do plan what we are going to do and then, in fact, follow that plan. When I get up in the morning and pick out clothing appropriate for the activities on my day’s schedule, I am making decisions based upon a narrative about my day’s activities and the appearance I want to present, which in turn is based upon my narrative about who I am, what people will think when they see me, what events will impinge upon me, and so on. It is far less plausible that I choose clothing first and make up the story of why I did so after the fact.

The second objection is that the argument that consciousness follows behavior and is a reaction to it rests upon an assumption that behavior (and the brain-related events that support it) is one event and conscious experience is another, with one causing the other. Actions, however, as Aristotle pointed out, are meaningful behaviors. Brain events can cause other physical events, but they cannot, by themselves, result in a meaningful action. An action achieves meaning as a consequence of its being situated within a narrative, and the narrative itself may precede, accompany or follow the behavioral component of the action. My flinging my arm out and knocking a lamp from a table because I have been startled by the sound of gunfire behind my back is a different action than my doing the same thing as a display of anger. But the two actions are not different merely because of the after-the-fact interpretations I give to them; they are different as they are occurring. The behavioral sequences involved in the actions are orchestrated differently because of the conscious elements that precede and accompany each action. Thus behaviors themselves have a narrative quality, which occurs both prior to and at the same time as the behaviors themselves and may determine the quality of the behavior (do I hit the lamp with enough force to break it, or do I lessen the blow so that it shows my anger but causes less destruction?).

The question of whether narrative consciousness precedes, and is a causal factor in, behavior, or whether its causal role is only an illusion (in fact, a story we tell ourselves) and narrative consciousness is always after the fact, may be unanswerable. Both situations appear to happen at different times.