Saturday
Aug 16, 2014

E-Books, Print, and Prices

Editor

 

Readers, writers, even publishers, especially Indie publishers (which in today’s world is often a cover term for self-publishers who have formed their own publishing imprints), are mystified by the ongoing battle between Amazon and Hachette—either mystified or consumed by strong opinion. There seem to be two real issues: first, who can determine the price of e-books, the publisher or the retailer; and second, what tactics a retailer may permissibly use to bring a publisher into line.

Hachette wants to charge more for e-books from its most prominent authors. Amazon wants to limit e-book prices, making the range of prices smaller and lower. The two companies must negotiate an agreement, and when Hachette refused to give in to Amazon’s demands, Amazon made Hachette’s titles harder to order on its website, delaying shipments, prohibiting pre-publication orders, or occasionally removing “buy” buttons from titles.

Amazon is able to sell e-books at a loss in order to build up its customer base and compete with other sellers. Doing so cuts into profits for the publishers. Amazon’s argument is that by lowering prices it will sell more books and publishers will still make plenty of money.

Most Indie publishers and self-published writers are on Amazon’s side. For most self-published authors, the goal is to get their books in the hands of as many people as possible. Low prices, even free giveaways, are prime strategies for doing so. Large publishing companies rely on established names, whose books will sell because they have a waiting fan base, which is willing to pay higher prices for its favorite authors.

Without getting into other issues, such as the discounts publishers pay Amazon for advertising their books, or the long-term implications of one retailer holding a monopoly within the book industry, why is this controversy important to anyone other than the big publishers and their best-selling authors? They will make less money, but the bulk of writers and Indie publishers will be unaffected, won’t they?

Amazon owns the lion’s share of the book-selling market, both for print and e-books, somewhere around 65% in each case. Print books still outsell e-books, but the numbers are deceiving and perhaps inaccurate (since most surveys don’t include Indie and self-published books), and e-books have surpassed print in the fiction category, especially genre fiction, where Indie and self-published authors now sell more books than authors published by the “Big Five” publishing companies. In fact, Indie and self-published authors actually make more money, and not just because there are more of them each making small amounts: in nearly every income category, even the $100,000-$250,000/year range, there are more Indie authors than Big Five authors.

There are lots of platforms available for self-publishing, or even real Indie publishing, using print-on-demand formats, and even more for e-book publishing. Amazon’s Kindle and its website have revolutionized sales, and its CreateSpace platform has allowed countless authors to enter the market. Authors who have gone through the agonizing process of being rejected by literally hundreds of agents and publishing houses are rightfully thankful to Amazon for allowing them finally to emerge into print. That is overwhelmingly why such authors are on Amazon’s side in this controversy.

But will lowering the price of best-selling authors help Indie and self-published authors? Probably not. To some extent the success of such authors has been because they have underpriced their works compared to those published by the Big Five. Lowering the price of the latter authors’ works will make them more competitive with Indie and self-published authors’ books.

As readers, we can all hope that book prices are lowered. Lowering e-book prices further will perhaps hasten the end of print publishing, at least for fiction titles, the print prices of which are now out of reach for many readers. From most current readers’ and authors’ points of view, this would not be good, but who knows what a younger generation of readers, raised on digital media only, will want or care about? Bookstore owners may soon be in the antique business.

Authors don’t lose when readers choose e-format over print. E-book royalties can be, and usually are, higher than print royalties, and sales numbers can be larger.

So for all but a few, the lowering of e-book prices is a good thing. Those of us who love print books and roaming through bookstores and libraries filled with real books can bemoan the ascendance of the e-book, but like luxury cars, fine wines, craft beers, and movie theaters (and now even drive-in theaters), something that is neither cheap nor strictly functional can survive if there is a market for it in our society. And as in each of these examples, prices that began high gradually lowered as the audience for them grew.

Even if lower e-book prices are a good thing, there is no reason to force the Big Five to lower the prices for their best-selling authors. Market trends are going to force them to do that anyway. And draconian tactics to force the Big Five to kowtow to Amazon’s demands, especially when such tactics penalize both authors and readers, are shameful and help no one.

 

Sunday
May 18, 2014

A Distant View of the Middle East

Editor

 

Does the current chaos in the Middle East (Syria, Lebanon, Iraq, Egypt, Libya) represent the waning of American influence in the world, or perhaps failed opportunities by a U.S. administration that has refused to take a leadership role in conflicts beyond its borders, or simply era-specific developments unique to a part of the world that is struggling with the transition from autocratic to democratic rule? Or perhaps it represents none of the above—or perhaps a little of each? Certainly, events in that part of the world are confusing to us. The civil war in Syria may be the most confusing. Bashar al-Assad, a leader whose religious affiliation is with the Alawite sect, an offshoot of Shiite Islam, is a tyrant who uses his military to rein in anyone in his country who challenges his policies. His opponents are mostly Sunni Muslims, who appear to have begun as protesters and freedom fighters but many of whom have now become champions of fundamentalist Islam as an alternative to the current regime. They are supported by Saudi Arabia and are allied with al-Qaeda, both within and outside the country, and particularly across the border in Iraq. Some of them want to establish an independent fundamentalist Islamic state occupying parts of both countries. The most prominent of these groups, the Islamic State in Iraq and Syria, or ISIS, apparently uses torture and execution techniques on civilians that are no different from the worst used by the Assad regime it fights.

Within Iraq, the Shiite leadership of Nouri al-Maliki has limited the voice of minority Sunni Muslims to the point that the Sunni-oriented al-Qaeda in Iraq has resurfaced and taken over part of the country. The U.S. is faced with the quandary of how much assistance to provide to the al-Maliki government to suppress the Sunni uprising, since the uprising is largely fed by fundamentalist al-Qaeda influence.

For the last several years, the United States has labeled Shiite Iran as its principal threat in the Middle East. Terrorist activities have been laid at the doorstep of Hezbollah, an Iran-financed Lebanese group of Shiite Muslims, whose principal target has been Israel but who are now fighting in Syria in support of the Assad government.

Americans, and particularly Americans who make foreign policy decisions, have not understood Middle Easterners. A recent study, which found that Tunisians were among the most liberal of the Middle Eastern citizenry, second only to Turks, showed that Tunisians viewed themselves as Muslims first and Tunisians second. They favored freedom of expression, but not if it included criticism of their religion (their new constitution, heralded as a model for the Middle East, includes the phrase that it is the state’s duty to “protect the sacred,” referring to Islam). At the same time, they were largely tolerant of other religions. They overwhelmingly favored arranged marriages and only about half felt that women, whose rights to hold office are now guaranteed in the constitution,  should be allowed to dress as they wanted. They valued American friendship, wanted American tourism, but were in favor of American defeat in Afghanistan, where they viewed the American effort as an attack on Sunni Muslims. They had stronger faith in their army than in their government. Most of their views were echoed, either strongly or weakly, throughout the Middle East.

Into what American political storyline do the above findings fit? The answer is none. Our views of the Middle East are too simplistic and culture-centric to encompass complexities that we don’t understand, but which make perfect sense to the inhabitants of Syria, Tunisia and Egypt. Americans tend to think in terms of being either for us or against us, of being in favor of free speech or against it, of separating religion and politics, and of human rights that include equal treatment of everyone with no regard for gender, sexual orientation, religion, ethnicity, or social status. These are not the terms that determine Middle Eastern sympathies.

Fareed Zakaria, host of Fareed Zakaria GPS on CNN, has recently argued that it has not been Obama’s inactivity that created the disasters of Syria and Iraq, but rather Bush’s activity. Bush virtually took over Iraq with no appreciation of the sectarian divisions within the country. He rescinded the power of the Sunni minority through a policy of de-Baathification, without realizing that he was installing a tyrannical Shiite majority as its substitute and fueling the fires of a Sunni uprising. In Syria, Obama has been accused by Senators McCain and Graham of having caused the al-Qaeda takeover of the revolution against Assad because we didn’t jump in and support the “friendly” insurgents soon enough. But recent New York Times reports on the attack on the U.S. diplomatic compound in Benghazi, Libya, indicate that it was the “friendly” insurgents against Gadhafi, whom we armed, who led the attack against our installation.

To say that the Middle Eastern situation is murky is an understatement. What should the U.S. do in such a case? The answer is simpler than it would appear to be.

First, don’t arm anybody. The evidence is plentiful: Afghan rebels, armed and trained by the CIA, morphed into al-Qaeda; the latest exposé of what really happened in Benghazi shows that the attackers used weapons that had been supplied to our rebel allies; and recent reports indicate that Boko Haram, which kidnaps innocent schoolgirls, is also using weapons supplied to Libyan rebels. Our own weapons almost invariably end up being used against us. Offering food, water and medicines is fine, but adding more weapons to this conflict is a definite mistake.

Second, don’t impose Western values upon Middle Eastern cultures. As a country that separates church and state to the point of forbidding Christmas carols from being sung on government property, we cannot begin to understand cultures that permit freedom of expression but not criticism of religion, or that put more faith in their military’s ability to run their country than they do in elected officials. Go to war to protect women’s rights in a Muslim country? We don’t have a clue what rights are important to Middle Eastern women or how their thinking fits into their own country’s culture.

Third, admit we don’t understand the “Arab Spring” (now fall, winter, summer, and on and on) and what it means to those who are engaged in it. There is no opportunity to hijack it for our own ends—period. We don’t know what it is about, and we’ll have to wait and see. We can pontificate about whether Arab countries are “ready” for democracy, but that’s mostly just Western (and basically White) prejudice: a haughty sense that the “dusky” races (or non-Christian countries) don’t really have the wherewithal to govern themselves democratically. Democracy can take many forms, and ours in the U.S. is just one of them. Equitable governance can take many forms, and we’re not even sure that we know how to achieve such a thing in the U.S. So what happens if we heed all of this advice and keep our distance?

 

We don’t know.

Thursday
Jan 30, 2014

Commentary

Income Inequality

Casey Dorman, Editor-in-Chief, Lost Coast Review

 

There seems no doubt that income inequality is increasing in the United States. Paradoxically, global income inequality is decreasing, although it certainly persists, as pointed out recently by Pope Francis. But, as Charles Kenny reported in Bloomberg Businessweek last December, “worldwide, consumption for the median inhabitant has increased about 80 percent [over the last ten year period], compared to around a 60 percent increase for the world’s highest-spending 1 percent.” So worldwide, incomes have become more, not less, equal. This is primarily due to remarkable gains for the middle class in China and India, where the economies have taken off in recent years, partly because of consumption by affluent nations, such as the U.S., which value these countries’ cheap products. Some have seen this as a validation of the “trickle-down” theory of economics, since the spending of the rich has boosted the incomes of the poor.

Despite these worldwide gains, income disparity has increased in the U.S. and the U.S. has, as measured by either the Gini coefficient or Palma ratio, the largest gap between rich and poor of any highly developed country. Three questions arise from this fact: 1) Why is this? 2) Is it bad? 3) If it is bad, what can be done about it?

There are multiple reasons that income inequality has increased in the U.S. A study by the National Bureau of Economic Research conducted in 2008 found that reasons for increasing income disparity included the failure of low-earning women’s salaries to rise—salaries which were more likely to be at minimum wage levels than were men’s. Other reasons were health issues and shorter life expectancy among the poor compared to the wealthy. Increased immigration of low-wage earners lowered the wages of high-school dropouts and previous low-earning immigrants, but did not affect native-born high school graduates. At the top end of the wage scale, salaries of the top 10% of earners, and particularly the top 5%—CEOs, sports and media superstars, and some professionals—have soared over the last 40 years, far outstripping the increases in salaries seen in the bottom 90%.

So in the United States the rich have gotten richer while those in the middle have stayed the same and those at the bottom have gotten relatively poorer. What’s wrong with that? If those at the bottom are still able to live comfortably, then maybe there is nothing wrong with it. But they are not. A forty-hour-a-week job at $7.25/hr., the current federal minimum wage, equals $290 per week or $1305 per month before taxes are taken out. And many of the poor are not employed full-time. In Orange County, California, where I live, the average rent is $1671 per month. California has a minimum wage that, at $8.00/hr., is higher than the federal wage, but that still translates into only $1440/month, more than $200 less than the average rent. We have government programs to assist people at such low incomes, but, before the Affordable Care Act, such a salary would not have qualified a family of two for Medicaid; they would have received an earned income credit of only $140; they would not have qualified for food stamps. These things would change if one of the two was a child, but then there would also be child-care expenses. For instance, if one of the two was a child and the other was his or her mother, and the child had child-care expenses of $450 per month, the two of them would be eligible for approximately $320 in food stamp benefits per month (which may be reduced by $90 in the near future if Congress adds such a reduction to the Farm Bill), and the child, but not the mother, might qualify for some kind of health care or discounted health care (after the Affordable Care Act, both mother and child would qualify for public health care). It is difficult to see how such a single mother and her child would survive without sharing an apartment, living with a relative, or the like, any of which would lower their eligibility for either earned income credits or food stamps.
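For readers who want to check the arithmetic above, here is a minimal sketch in Python. The hourly wages and the $1671 average Orange County rent are the figures quoted in the paragraph; the conversion factor of roughly 4.5 paid weeks per month is my own assumption, chosen because it reproduces the monthly totals cited ($1305 and $1440).

# Rough monthly-budget arithmetic behind the figures quoted above.
# ASSUMPTION (mine): about 4.5 paid weeks per month, which reproduces the
# article's monthly totals; a 52-weeks/12-months convention gives slightly less.

HOURS_PER_WEEK = 40
WEEKS_PER_MONTH = 4.5        # assumed conversion, see note above
AVERAGE_RENT = 1671          # average Orange County rent cited in the text

def monthly_gross(hourly_wage):
    """Pre-tax monthly earnings for a full-time job at the given wage."""
    return hourly_wage * HOURS_PER_WEEK * WEEKS_PER_MONTH

for label, wage in [("Federal minimum ($7.25/hr)", 7.25),
                    ("California minimum ($8.00/hr)", 8.00)]:
    gross = monthly_gross(wage)
    print(f"{label}: ${gross:,.0f} per month, "
          f"${AVERAGE_RENT - gross:,.0f} short of the average rent")

Either wage leaves a full-time worker’s entire pre-tax paycheck short of the average rent alone, before food, transportation, or child care enter the picture.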

So the plight of the poor in America is bad. What can be done about it? A higher minimum wage would help. California is scheduled to raise its minimum wage to $9.00/hr. this year and to $10.00/hr. next year. There are proposals in Congress to raise the federal minimum wage as well. David Brooks has recently argued that increasing the minimum wage would only help a minority of families in poverty because the majority of minimum wage earners are third or fourth members of families who are above the poverty line. He cites one study in support of his position, but other studies have challenged that finding. In fact, the Economic Policy Institute has recently shown that over half of those earning a minimum wage come from families with a total income of less than $40,000 per year (200% of the federal poverty level for a family of three), which is not poverty level but is certainly in the lower income range. Raising the income ceiling for food stamps would also help, as would extending Medicaid to a larger group with higher incomes. These are all government interventions. Republicans oppose such measures and prefer to reduce taxes on those who pay the most taxes, i.e., the wealthy, in the belief that they will spend more, invest more, and grow the economy so that more people have better-paying jobs.

A greater number of better-paying jobs will help the middle class, who have the education and training to take advantage of them, although middle-class wages have not increased, relative to inflation, for the past forty years while high incomes have. But at the bottom of the work ladder, those low-paying, minimum-wage jobs are surely not going to profit from the increasing wealth of the income elite. Many of the low-paying jobs in our country are part-time. They are part-time because employers don’t want to pay the benefits they would have to pay to full-time workers. Many fast-food and retail chains are guilty of this practice, which they believe increases their profits by decreasing their costs (Walmart has recently found that this doesn’t work so well: after a well-publicized reduction of workers to part-time status to avoid Affordable Care Act health insurance requirements, it has brought 35,000 workers back to full-time status because of the inefficiency such a reduction caused in its store operations).

A minimum wage should be a livable wage, which at present it is not. Additionally, since life at the bottom of the wage scale is never going to be very good, we need to change our society in such a way that those who are born into the bottom socioeconomic rung of society have a way to advance above that status. The answer to that is better education, affordable health care, and training for technical jobs that can be carried out by well-trained persons whose academic, as opposed to technical, education need not be above a high school level. Charles Kenny has pointed out that some of the upward movement of the poor in third-world countries has been due to a focus upon education. “Primary school enrollments in Sub-Saharan Africa have risen from 70 percent in 1990 to 100 percent today. Secondary enrollment in the region has climbed from 22 percent to 41 percent over the past 21 years.” But the U.S. education system is woefully inadequate, especially for those who live in poor communities where the tax base is weak and the cultural context makes going to school a dangerous and often seemingly pointless endeavor.

The working poor also need political power. A Cesar Chavez figure is needed to champion the low-paid fast-food and retail workers, someone with the charisma and legitimacy to lead a nationwide boycott or some similar action that will bring the giants of the food and retail industries to the negotiating table to agree on higher, livable wages. In New York City, the new mayor, Bill de Blasio, has called the wage practices of fast-food chains in the city “an unsupportable situation” and has called for city action to require an increase in wages.

A newly published study by researchers at the University of California, Berkeley and the University of Illinois at Urbana-Champaign found that: “More than half (52 percent) of the families of front-line fast-food workers are enrolled in one or more public programs, compared to 25 percent of the workforce as a whole. The cost of public assistance to families of workers in the fast-food industry is nearly $7 billion per year. At an average of $3.9 billion per year, spending on Medicaid and the Children’s Health Insurance Program (CHIP) accounts for more than half of these costs. Due to low earnings, fast-food workers’ families also receive an annual average of $1.04 billion in food stamp benefits and $1.91 billion in Earned Income Tax Credit payments. People working in fast-food jobs are more likely to live in or near poverty. One in five families with a member holding a fast-food job has an income below the poverty line, and 43 percent have an income two times the federal poverty level or less. Even full-time hours are not enough to compensate for low wages. The families of more than half of the fast-food workers employed 40 or more hours per week are enrolled in public assistance programs.” As Forbes Magazine noted, all these things were true while the ten largest players in the fast-food industry “made a cumulative $7.4 billion in profits in 2012, paying out an additional $7.7 billion in dividends and buybacks to shareholders.”

The rich are getting richer, but the middle class, the group to whom all politicians seem to speak, is doing OK. It is the poor in America who are falling further and further behind. There is no indication that wealthy or corporate America has any inclination or plan to fix this problem. The burden falls upon the rest of us to support government programs that increase wages and to end our support of greedy businesses that fail to pay their workers enough money to live on.

Thursday
Oct 17, 2013

Commentary

When Democracy Stops Working

 

The recent circus in Washington, D.C.  over the Continuing Budget Resolution and the Debt Ceiling has served to highlight the inability of our elected politicians, from the president down to the lowliest freshman congressman, to solve the country’s political problems.

 

I admit that the fight should never have happened. In the first place, Congress should have worked out budget agreements that would be suitable for both parties long ago. The blame goes to both parties. Democrats in the Senate failed to pass a budget for four years, and when they did, Republicans refused to continue funding the government as the budget expired unless Obamacare was defunded or delayed. Talks that should have occurred over a new budget did not take place. Secondly, there should be no “debt ceiling.” We have to increase our debt limit so long as we are still servicing a large debt and have not balanced the federal budget. Other countries in similar situations simply keep borrowing more money and debate the budget, not a ceiling on their debt (admittedly, some of our European friends have gotten themselves in over their heads in debt).

 

OK, so the debates should never have occurred. But they did. And neither side has distinguished itself as either smart or noble in the show in Washington. Both sides have asserted their point and then refused to move beyond it prior to any discussion with the other side.

 

The result has been a total breakdown of governmental function. Politicians are supposed to be smart negotiators… aren’t they? Watch Steven Spielberg’s Lincoln and see how Honest Abe did it (how many of us hoped that Barack Obama would watch this movie and learn something?). Make Lyndon Johnson your modern role model. Ask Bill Clinton how he dealt with Newt Gingrich. Or vice-versa.

 

We elect our representatives to work with each other and make our government function. If they do not do this, they are not putting any of our wishes into effect, unless we are anarchists and don’t want our government to function at all.

 

Again, we must remind ourselves that everyone in Washington is at fault here. No one has extended a hand to the other side. I can’t help but think that the way each side is approaching the issue of the budget in Washington is the way America is approaching its role in world politics—do whatever we want and refuse to talk about it with anyone else. This is an ominous sign of American hubris. The lesson here is that if you take a stance based upon absolute certainty that you are right and refuse to negotiate because of that certainty, you risk becoming estranged both from the views of those to whom you, in theory, report (i.e., the citizens) and from reality.

 

Many of us feel certainty. But  because we are all human and subject to our native and environmentally produced biases, our certainty is no guarantee that we are right. We are a social species and despite Alexander the Great, Genghis Khan, and other great figures in history, real progress of our species and culture has been the product of social structures that supported wholesale cultural movement because of a shared view of the goals of our civilization, not the implementation of the narrow vision of a single person or a small minority. We need to develop such a shared view of what our American civilization is supposed to look like. Our politicians have an obligation both to lead us and to talk to each other in furthering such a shared view.

 

The Editor

 

 

Saturday
Aug 3, 2013

Commentary

The Creation of Culture

 

On This Week with George Stephanopoulos, conservative columnist George Will criticized the suggestion that state or federal help in investing in Detroit could pull the city out of its economic crisis, claiming that federal assistance, in his words, “[c]an’t solve the problems, because their problems are cultural.”

According to Will, “You have a city, 139 square miles, you can graze cattle in vast portions of it, dangerous herds of feral dogs roam in there. Three percent of fourth graders reading at the national math (sic) standards, 47 percent of Detroit residents are functionally illiterate, 79 percent of Detroit children are born to unmarried mothers. They don’t have a fiscal problem, they have a cultural collapse.”

Shrinking from a population of around 2 million in the mid-nineteen-sixties to barely over 700,000 currently, 83% of whom are African-American, Detroit has dropped from the United States’ fourth largest city to its eighteenth. Only two auto plants remain in the city. The median household income of $27,000 in Detroit is approximately half of that in the state of Michigan as a whole, which itself ranks 34th among the states. The jobless rate is close to 30% and the poverty rate is nearly 40%. A quarter of the population never graduated from high school and only 12% have college degrees. The police force of 2,700 officers is down from nearly 4,000 ten years ago. The  rate of violent crime  in Detroit is the highest in the country among large cities.

The arguments will continue as to whether Detroit represents the future of the American city, or at least the American industrial city (Flint, Pittsburgh, Buffalo, Toledo, Chicago, Milwaukee, Akron) of the old Rust Belt. At the least it represents a symptom of the woes of big American cities that manufacturing has left—taking not just jobs, but the all-important taxes from the employers—and from which affluent residents, and their property taxes, have fled to the suburbs.

George Will is seeking to affix blame for Detroit’s woes and he clearly blames its citizens. That seems to me to be blaming the victims. I’m not sure who is to blame for Detroit’s circumstances, although fingers have been pointed in enough directions (politicians, the auto industry, city workers and pensioners) that the fault is surely multi-determined. One thing is sure however: Detroit represents a cultural failure.

A recent article in Boston Review by Jess Row examined the focus of many white writers in America (Richard Ford being a prime example) on seeking a safe, suburban or exurban community where matters of identity, conscience, existence, etc. could be examined against a backdrop  of what for those writers is traditional (White) American culture. The plight or even existence of a Black or Hispanic urban population has been studiously ignored by these writers… as it has by the rest of White America.

The cultural problem we live with in this country is a division between rich and poor, which often coincides with a division between White and Black or Brown. Those who are not doing well in our society, those who are undereducated, un- or underemployed, those who are not safe in their neighborhoods, and those who require government assistance to have enough to eat are unrepresented and uncared for. Not only do the country’s intellectuals ignore them, but so do the politicians, who rant about the “middle class” but ignore the truly poor, who preach the dire consequences of the country borrowing more money than it can easily repay but ignore those who haven’t got enough money to live, who proclaim “family values” but ignore social circumstances that destroy families, who serve the super-rich—the group that has increased its relative income by nearly 20% while the rest of America has remained stagnant or drifted backwards—and who are more concerned with abortion issues, the profits of health insurance companies, and the defeat of fundamentalist Islam than with the fabric of American culture… all of America and all of its culture.

George Will is right that Detroit represents a failure of culture, but it is the culture of which he is a part that has failed, not the culture of the victims of that failure. (For a glimpse of the culture that exists in the Detroit area—actually Flint, MI—read Randall Mawer’s review of Gordon Young’s Teardown in this issue of Lost Coast Review.)

The Editor

Friday
May 31, 2013

Commentary

Mental Health Recovery and the Arts

By Richard Krzyzanowski

Most people will seize an opportunity to talk about themselves, as we tend to be our favorite subjects. Of course, we all tend to edit our material to some extent, highlighting the flattering and omitting that which we feel others may not approve of or understand. Many of us – somewhere between one fifth and one quarter of us – are reluctant to open that door at all, however, because of where the process of disclosure might lead: We are people with a personal experience of mental health challenges, a subject so touchy that often not only do we avoid discussing our lives with others, but we sometimes hesitate even to think about it ourselves.

This is the face of stigma, whether public stigma or its internalized brother, self-stigma, and the damage it can do is very real, from lost opportunities in society to lost self-esteem to the radical attempt to escape this painful situation entirely through suicide.

The most productive strategies used to fight stigma and its impact on lives involve using the truth to give perspective: We are not defined by our diagnoses or our illnesses; mental health issues touch most of us at some point in our lives, either directly or through someone we know and may care about; there is life after and with such challenges, which we call “recovery,” a powerful concept precisely because we had all been told for years that it was not possible.

Once such doors are opened, we have the room to get on with life -- even to introduce a celebration of those parts of us that have nothing to do with our “conditions.” Engaging our creativity through the arts has played an especially significant role in helping people find a positive identity in society and for ourselves.

Society honors and venerates the artist, whatever their art or craft -- visual, dramatic, musical, literary, etc. When a person who manages a mental health issue is transformed into an artist, the mental health side of things gets cut down to size, in the eyes of others as well as in the mirror, and the result can be wonderfully liberating and exhilarating. Certainly, the mental health issues don’t go away, but they become appropriately contextualized as what they should be: Just another of the challenges that life presents to us all in one way or another, and just another facet in the complex internal diversity that uniquely defines us all.

 

Richard Krzyzanowski is a former career journalist who came to the mental health field more than ten years ago, following his own experience of a mental health crisis. He currently works as a trainer and organizer—primarily within the mental health "client" community—and is well-known for his advocacy as a member of several state-level committees, including his role as chairman of the California Client Action Workgroup.


Wednesday
Feb 13, 2013

Commentary

The Black President’s Burden

One of the most insightful—and devastating—descriptions of the Western World’s influence on Africa is Walter Rodney’s 1972 How Europe Underdeveloped Africa. Rodney’s searing accusations against Western governmental and corporate greed, and their role in keeping the resource-rich continent of Africa and its people firmly stuck in the third world, according to Nigel Westmaas, “led to a veritable revolution in the teaching of African history in the universities and schools in Africa, the Caribbean and North America.”

But whether or not Rodney’s treatise changed the teaching of African history, it seems to have had little or no impact upon the policies of the West. No knowledgeable person believes that the Africa described by Rodney ended when the African nations gained independence in the sixties, or that Western countries today have stopped using Africa for their own ends while giving no heed to the welfare of the African people.

Both Europe and the US stood by during virtually all of the major non-Arab African upheavals of the last two decades, including the genocidal wars in Rwanda, Darfur and Congo, even as NATO intervened directly in Kosovo and Libya to protect civilians from slaughter. After suffering catastrophic losses during relief operations in Somalia in 1993, the US appeared to adopt a hands-off policy with regard to African conflicts.

With new fundamentalist Islamic threats in Mali as well as other areas of sub-Saharan Africa, the US is now beefing up its military presence on the continent, with advisors and technology such as drone aircraft. France has troops on the ground in Mali. The US instituted an official Africa Command, AFRICOM, one of six such commands worldwide, in 2007. Recently it was revealed that the US will station drone aircraft in Niger and already has several other drone bases up and running in Africa.

What this means is that American foreign policy toward Africa, having been weak on the economic and humanitarian sides in the first place, is becoming increasingly a military policy because of the threat of al-Qaeda—a development that failed to materialize in response to millions of Africans being killed in genocidal wars only a few years ago. What is the difference? When Africans were killing Africans, the outcome was deemed not vital to US interests. But al-Qaeda poses a different threat. Unlike the various factions involved in the genocidal wars of the past, al-Qaeda is hostile to the West and to the US in particular. If it overthrew some of the African governments, we would lose our access to vital resources.

In a recent interview, Emira Woods, co-director of Foreign Policy in Focus, a Washington think tank, pointed this out. Speaking of the militarization of US African policy, she said, “It coincides with Africa increasing its significance as a supplier of oil to the United States. AFRICOM stood up in October 2008 just as Africa was actually reaching … 25 percent of the oil input that comes to the US from [abroad] so Africa was increasing its significance not only for oil [and] natural gas but for other vital resources so we have seen the steady increase in the militarization.”

It is a shame that our policies toward Africa are still being dictated by the same underlying agenda that prevailed during the colonial era. The African continent has contributed both natural and human resources to fuel the economies of the West for centuries. The US record with regard to using slave labor before the Civil War and with giant American corporations such as Firestone, which used Liberian rubber, Unilever, which made soap and other products from African raw materials, and Shell, which extracted oil from the colonial period until now, should make us, as a country, particularly eager to help struggling African nations develop, not just become military bases to protect our interests.

We have a President who is half African, whose father was born and lived in Kenya. Yet, Barack Obama has shown no inclination to pursue a new or enlightened policy toward Africa and, in fact, appears committed to heading down the road of further militarization of our African policy. Instead, he should be reexamining that policy to make it benefit Africans, not just America.

The Editor

 

Wednesday
Dec 12, 2012

Commentary

French Thought?

 

OK. I’m all geared up for my trip to Europe. I envision myself sitting by the sea in Barcelona, eating tapas and drinking wine, maybe winding through the labyrinthine streets of Lisbon, or stumbling over the crumbling ruins of Rome, perhaps picking my way along the cobblestone streets of Salzburg. But most of all I see myself sitting at a sidewalk table at Les Deux Magots on Paris’ Left Bank and soaking in the ambience of European consciousness, and particularly that which is the most European of all, French thought.

To be truthful, my visions of Paris are of Hemingway writing with a pad and pencil in a café, a beer in front of him and critical eyes directed toward the passing pedestrians, hoping not to be interrupted by one of the silly American or English expatriates who only want to gossip. Not that Papa was above a little gossip himself. At other times, I imagine Sartre, drinking coffee or wine, reading and taking notes, hopeful that an adoring young Sorbonne student will recognize him and distract him from his heavy intellectual exercises and burdensome sense of responsibility, then bending over his tablet and writing a trenchant line or two, directing the rest of us to be free but not free of guilt.

So in preparation for my vigils at the sidewalk cafes of St-Germain-des-Prés, I decide I’d better bone up on the latest French thought, which means first refreshing myself on early twentieth century French philosophy and then tackling that which in the last twenty or thirty years has become all the rage. The French debate this stuff the way Americans talk about sports, don’t they? How can I hold up my head at Café de Flore and be ignorant of Foucault, Derrida, and Lyotard and now Marion, Nancy and Laruelle?

I’m no philosopher but I’ve read my share of Wittgenstein, Ryle, Searle, Fodor and Dennett, not to mention, Anscombe and Foot. As a self-styled empiricist, I cut my teeth on analytic philosophy and loved its close ties to science. Being something of a cognitive scientist myself, I could see the value of philosophy in clarifying my ideas and provoking the pertinent questions. And guys like Fodor and Dennett have a breezy sense of humor, making reading them a pleasure.

But what’s going on with the French? Other than Foucault and, refreshingly, Malabou, their philosophy doesn’t seem to refer to anything. It curls back upon itself—analyzes its own language and that of its own incestuous practitioners. Did positivism stop at the shores of France?  Wasn’t Comte a Frenchman? Why do they still criticize ideas I thought no one took seriously (e.g. classical Marx and Freud) and have an obsession with the transcendent and the ideal (spending thousands of words dismissing what I would have thought they never should have considered in the first place)? And why couldn’t the Ordinary Language movement have made at least some inroads into the French intellectual culture, at least to the extent that it would deal with language as it is used to communicate? They only harp on language as form, on reality as language. They only talk to each other. At least they have a social and political conscience, or appear to.

I blame it all on phenomenology.

The Editor

Tuesday
Sep 11, 2012

Commentary

The Wild West

 

When will Americans finally become fed up with mass killings? In recent years we have had the Virginia Tech murders, the Gabby Giffords shooting in Arizona, the Batman movie killings in Denver and the shootings at the Sikh temple in Wisconsin. In addition to these well-known incidents, there have been, in the last ten years, 18 other mass shootings in the United States, yielding a death toll of 126 people. The perpetrators of the incidents at Virginia Tech, Arizona and Denver clearly had mental problems. No doubt many of the other gunmen did also. Curiously, these events have raised more of a cry for improved treatment of mental illness than for increased gun control.

 

The US has no more mental illness than the rest of the world, but it does have more guns per person than any other country in the world.* However, opponents of gun control have argued that mass killings occur nearly as often in Europe as in the United States, and they cite several school shootings in Germany and the tragedy in Norway, in which a lone gunman (who was declared sane) killed 77 people, as evidence that gun control does not reduce killing. However, these European incidents merely show that strict gun control does not eliminate mass killings. Isolated incidents, either in Europe or in the United States, do not give an accurate picture of gun-related deaths. The U.S. still leads all European countries, except Estonia, in gun-related homicide rates and is 28th in the world, behind a number of South American, Caribbean and African countries in this category.** (The oft-cited statistic that the US has the highest rate of gun-related deaths in the world is simply inaccurate. However, the rate of gun-related deaths and gun-related homicides in the US is the highest of any wealthy, developed country.)

 

Would stricter gun control laws reduce the incidence of gun-related deaths in the U.S.? The data bearing upon this issue are suggestive but not conclusive. Across all fifty of the United States there is a significant inverse relationship between a state’s having restrictive gun laws, such as a ban on assault weapons, trigger-lock requirements and safe-storage-of-firearms laws, and its rate of gun-related deaths.*** There is also a robust positive relationship across states between gun ownership and gun-related homicide rates.**** A well-known study found that jurisdictions with “right to carry” concealed weapons laws had lower crime rates than other jurisdictions.***** However, other large-scale statistical analyses comparing US counties that had passed laws allowing the carrying of concealed weapons with those that hadn’t failed to show any effect of such laws on gun-related homicides or crime rates in general.****

 

All of the above data are correlational. Higher education of its citizens, liberal orientation of its electorate and higher wealth (as well as a higher percentage of immigrants!) are all positively correlated with low gun-related death rates in the United States, and it may be that those characteristics are what lead voters to pass stricter gun control laws, while the laws themselves have no direct causal effect upon death rates. Violent crime rates, both across the United States and in many of the states that passed right-to-carry laws, were already decreasing when those laws were passed, and further decreases may have just mirrored a national trend.

 

So if we can’t say anything definitive about the causal influence of stricter gun control on gun-related death rates, what can we say about the need for stricter gun control? More importantly, what suggestions can we make for reducing the rate of gun-related deaths in the United States? Well, people can’t use guns to kill people if they don’t have access to guns, so restricting access to guns ought to reduce gun-related deaths. The problem is how to do it. Mexico is an example of a country in which strict gun sale laws exist alongside outrageous rates of gun ownership (mostly via weapons smuggled in from the US). So strict gun sale laws need to exist, but alongside strict enforcement of those laws. This would include both a total ban on ownership of assault weapons and (as many countries have in place) a required demonstration of a need to own a gun that does not include self-defense, as well as strict background checks on those who buy guns for legitimate reasons such as target practice or hunting.

 

Probably as much as by gun proliferation, the US is plagued by a culture of violence, which celebrates violence as a method of solving problems. We equate our country’s power and prestige with the strength of its armed forces, rather than with its intellectual or creative achievements.  Our media is saturated with images of violence in films, books and music.  When this veneration of violence is combined with the highest rate of gun possession in the world and a stubborn resistance to curb gun possession because of a belief that it is one of our inalienable rights, we find ourselves exactly where we are today—the most violent of the world’s well-developed countries.

 

The Editor

 

*www.reuters.com/article/2007/08/28/us-world-firearms-idUSL2834893820070828

**http://www.guardian.co.uk/news/datablog/2009/oct/13/homicide-rates-country-murder-data

***http://www.theatlantic.com/national/archive/2011/01/the-geography-of-gun-deaths/69354/

****http://www.nber.org/papers/w7967.pdf

*****John R. Lott and David B. Mustard, 'Crime, Deterrence, and Right-to-Carry Concealed Handguns'. The Journal of Legal Studies, 26 (1997)

 

Thursday
May 31, 2012

Commentary: In Memoriam - Lawrence Howard; Who Wrote Shakespeare's Plays?

In Memoriam

Lawrence Howard

 

I first met Lawrence Howard when he was a psychology post-doctoral intern with the county mental health department. I was a neuropsychologist teaching a course in testing to county employees and interns, and he was a psychologist trained in cognitive science who had decided to become a clinician. Two things struck me about Dr. Howard at that time: he was brilliant and knowledgeable, and he was in a wheelchair, paraplegic from a childhood illness. Later, when I was forming a teaching service within the county mental health department, I hired Dr. Howard (I always called him “Howard” although others called him “Larry”) as a half-time teacher. The other half of his time was spent as a professor of cognitive science at the University of California, Irvine.

 

Howard and I became both colleagues and friends. When he later moved into a clinical position for the county, working with children, I kept up sporadic contact with him, mostly through trading videos or having coffee together. He had a great love of film. Anyone who frequented the University Cinema across the street from UCI would have run into him often.

 

After he and I both retired from county service, I often ran into Howard at the coffee shop across from the university and he and I talked about many things, but mostly politics (he was a liberal who raved even more than I did and who had edited a book on terrorism) and films.  I convinced him to review films for Lost Coast Review and previous issues have been fortunate to include a few of his reviews. When I last had coffee with him, Howard promised me another review for this issue. Then he failed to show up at the coffee shop for a couple of weeks and didn’t answer his phone. I found out through an announcement from the UCI Disability Services Department, where he had taken part-time employment, that he had died.

 

Howard taught many university students, he mentored interns in our county internship program in psychology and he counseled many clients. He was a gentle, intelligent, interested, caring human being and he will be missed by many whose lives he touched.

Casey Dorman

 

Who Wrote Shakespeare’s Plays?

Shakespeare and DeVere

Some of us wonder about the fate of D.B. Cooper. Some of us continue to question whether Lee Harvey Oswald and Sirhan Sirhan were really lone assassins. But perhaps the greatest mystery of our time is the question of who wrote the plays and poems attributed to William Shakespeare. Unless informed by films such as Anonymous, most of the public may not even be aware of the controversy concerning Shakespeare’s identity. In a nutshell, the William Shakespeare (most often spelled Shakspere) who was born and died in Stratford-upon-Avon never attended a university, may not even have attended grammar school, never claimed to have written any poems or plays, wrote no known letters, scrawled his signature as though he were either illiterate or debilitated, and left no books or copies of his own works when he died. Seizing upon these oddities as well as upon the lacuna of information about the man, several scholars and amateur sleuths have proposed that Shakespeare’s great works were actually the creation of someone else. The prime candidates have been the philosopher Francis Bacon and the poet and courtier Edward de Vere, the 17th Earl of Oxford. Both of these men lived lives that included the learning, the travel and the talent to produce great works of literature.

 

The purpose of this discussion is not to take sides in the Shakespeare controversy. I cannot deny that the debate is an interesting one, and, purely out of curiosity, I have immersed myself in the arguments from both sides. What impresses me about these debates is the exaggerations, the factual omissions, the obfuscations, and the blatant distortions promulgated by both sides as they have made their arguments. Furthermore, the degree to which such misstatements enter the polemic in which any particular author engages is not at all mitigated by the writer’s scholarly credentials.

 

In one of the most entertaining and controversial books on the subject, Charlton Ogburn’s (1984) The Mysterious William Shakespeare: The Myth and the Reality, the author refers to both sides as constituted of “true believers,” a label he disparages but which patently applies to himself as much as to anyone else to whom he refers. Although I have been entertained, when I began looking for some evidence to resolve the question my own reaction turned to dismay, for I was hoping to find someone who exercised skepticism when evaluating both sides of the question, someone who used scholarly restraint when invoking words such as “probably,” “no doubt,” “without question,” or “most certainly” in his or her arguments rather than treating such tentative assertions as fact. But alas, nary an evenhanded treatment of the subject appears to have been written. Both sides argue from tenuous foundations, claim absolute certainty, and revile the assertions of their opponents with disdain bordering on insult.

 

Why is this controversy, which anyway occupies the attention of only a minority of thinking people, important? Not because of the position of Shakespeare within the canon of English literature, for as several authors have suggested, the body of work itself is surely monumental enough to stand on its own without such additional biographical information, although to finally know more about the playwright, or to find that, for instance, he was a constant collaborator when he wrote or that the plays were actually written by several different authors, would change our view of the character of human literary genius. But to me, what makes the issue important is its ability to demonstrate the fallibility of human reasoning, even by some of our race’s finest minds, when those minds feel they have a stake in the outcome of the question. Although mainstream academia mostly rejects the identity question, a few academicians have deigned to participate, mostly to denigrate both the motives and the qualifications of those who have raised the issue. This is despite the fact that those who have weighed in on either side of the debate are men and women who have often devoted years, occasionally even lifetimes, to scholarly study of the topic, as professors, as actors, as critics or writers, and almost all as devotees of Shakespeare’s works. Yet the influence of such lofty exercise of one’s mind, such earnest devotion to scholarship and, in some cases, to the values of academia, which, one would assume, would include truth above almost everything else, appears to carry little weight, as it routinely falls victim to bias and personal preference to an extent that ought to shake everyone’s faith in the wisdom of human expertise.

 

The entire enterprise of challenging or defending Shakespeare’s identity is yet another example of human reason being trumped by that most pernicious of ways of thinking we refer to as “belief.” If some of the world’s most brilliant minds cannot maintain objectivity on as presumably dry a subject as Shakespeare, how can we put any faith in the ratiocinations of our so-called experts on such things as global warming, terror threats, Iran’s nuclear ambitions, economic policy, or any of the other issues that so quickly become the substance of our foreign and domestic debates? Is it any wonder that we misjudged Iraq’s threat to us, or that the CEO of J.P. Morgan Chase made such a drastic error with regard to fiscal risk management, or that Europeans can’t figure out how to solve their debt crises, or that it’s nearly impossible to find out the truth about the advantages or the risks involved in adopting Obama’s health care reforms?

 

Is it really as difficult to be cognizant of the truth when defending one’s  position as it appears to be?  I have not given examples from cognitive psychology, which generally provides little evidence that people use logic or reason to come to decisions (except perhaps in solving puzzles or doing mathematics), especially with regard to real-world problems. But I am struck by how clearly the debate about Shakespeare’s identity gives one more example of how easy it is for us to abandon our adherence to truth and logic when it comes to something in which we “believe.” It is no easy task to be objective, to examine both sides of an issue equally, to weigh evidence rather than opinion. It flies in the face of human nature. But without doing so, our debates become silly and our conclusions at best, weak, and at worst, dangerous.

 

The Editor

 

Sunday
Feb 26, 2012

Commentary

Gearing Up for Battle

 

 In a recent New York Times article, Scott Shane discussed the length and cost of the American wars in Iraq and Afghanistan, concluding that, “The outcomes seem disappointing and uncertain.” Then he raised the question, “So why is there already a new whiff of gunpowder in the air?” A Pew Research Center poll in February found that 58% of Americans favored the use of military force to prevent Iran from developing nuclear weapons.

 

Shane’s article goes on to examine the inconclusive pronouncements of the IAEA regarding whether Iran’s nuclear ambitions are military, the similarly tentative stance taken by the American Director of National Intelligence, the cautionary words of both Barack Obama and the Chairman of the Joint Chiefs of Staff, the provocative words and behavior of Israel and the competition among the Republican Presidential candidates to be the toughest opponent of Iran and the friendliest toward Israel. His analysis conveys the current situation accurately, but it fails to explain the readiness of so many Americans to re-enter the world of international military conflict so soon after such dissatisfying battlegrounds as Iraq and Afghanistan.

 

As an observer of the American scene, I find the answer to Shane’s question obvious: America all too easily and too often celebrates war and its warriors. President Obama welcomed the last troops home from Iraq with the words, “your service belongs to the ages; it was an extraordinary achievement.” Come on. We outgunned a third-rate army and fought insurgents to a standoff. No sporting event in America begins without homage to, and often a prayer for, the soldiers fighting the wars overseas. Curiously, when we pray at these events, we routinely invoke the name of a man who advised us to love our enemies and turn the other cheek. Many Americans regard our President’s, if not our nation’s, finest hour as the one in which a Navy SEAL assault team shot and killed an unarmed man. Huh?

 

We are faced with the dilemma that, while most Americans now agree that neither the Iraq nor the Afghanistan war was vital to national security, the men who fought in those wars voluntarily risked or gave their lives out of a sincere belief that they were protecting all of us here at home. We want to honor and thank these people, these soldiers, for their service.  But in the process we make several mistakes, which, if left uncorrected, will lead us into yet another unnecessary and fruitless conflict.

 

We do not “owe our freedom” to these soldiers. Our freedom was never in jeopardy, at least not from Saddam Hussein or the Taliban. Each soldier’s personal decision to risk his or her own life in order to protect ours was both brave and noble; his or her engagement in warfare to carry out that decision was not.

 

It is not always patriotic to fight a war. It may be, but it may not be if the war is costly and unnecessary. Military actions by our own armed forces do not always (perhaps not even usually) promote freedom. As often as not they protect American interests - either economic, strategic or political - often at the expense of the freedom of those people who live in the countries in which we fight (e.g. Vietnam, Iraq, Afghanistan).

 

The American armed forces are not invincible, despite claims that they are “the most powerful fighting force in the history of the world.” We lost in Vietnam, won in Grenada, sort of won in Iraq and fought to a draw in Afghanistan. All of our opponents have been second-rate powers or weaker. No American politician or media personality is going to admit this. They will all continue to compete over who can beat the drum for military glory the loudest, uniformly equating their support of our troops with love of country. And the common man will not question them.

 

We cannot promote irrationality when it comes to celebrating war and expect to make rational decisions about going to war. This is why we will go to war, unnecessarily, again.

The Editor

 

 

 

Friday
Oct142011

The Theory of Relativity

Do neutrinos travel faster than the speed of light? Probably not, but the claim by a scientific team at the European Organization for Nuclear Research, known as CERN, has generated not just skepticism but also eagerness to replicate the finding at an independent site. Why the intense interest? The excitement arises because the premise that nothing can travel faster than the speed of light is a cornerstone of Einstein’s special theory of relativity (remember e=mc²? The c is the speed of light). If anything can travel faster than the speed of light, then one of the main premises of Einstein’s theory, and of the modern conception of the universe, is incorrect.

Neutrinos are apparently notoriously difficult to measure. This particular measurement required sending them from an accelerator at CERN to a detector roughly 454 miles away, across two countries, where they appeared to arrive about 60 nanoseconds sooner than light would have over the same distance. Most scientists regard the result as a measurement error, but that will not stop them from conducting further experiments in case they are wrong. Healthy skepticism following unexpected results is part of science (remember Thomas Kuhn?), but so is the urge to experiment further to test the limits of science. More importantly, the need to revise theories in the face of new evidence is what sets science apart from religion, faith, belief and old wives’ tales, which govern most of human behavior.
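For a rough sense of how small the claimed effect was, here is a back-of-the-envelope sketch of my own, assuming the approximate figures reported at the time (roughly a 730-kilometer flight path and a 60-nanosecond early arrival), not the exact experimental values:

```python
# Back-of-the-envelope check of the reported anomaly, using approximate,
# assumed figures: a roughly 730 km flight path and a 60 ns early arrival.
c = 299_792_458.0      # speed of light in m/s
distance = 730_000.0   # approximate CERN-to-detector distance in meters
early = 60e-9          # reported early arrival in seconds

light_time = distance / c               # time light needs: about 2.4 milliseconds
fractional_excess = early / light_time  # implied (v - c) / c

print(f"light travel time: {light_time * 1e3:.2f} ms")
print(f"implied speed excess over light: {fractional_excess:.1e}")
```

On those assumptions the neutrinos would have beaten light by only about two or three parts in a hundred thousand, which helps explain why most physicists suspected a subtle timing or distance error rather than a failure of relativity.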

“It’s just a theory,” we say about evolution, global warming, the Big Bang, etc., and half of us dismiss these theories’ claims because of that statement while the other half deny their theoretical status and claim them as “fact,” or “settled science,” or what “science has found.” Certainty is a matter of predictability within accepted limits of error and a matter of consensus by either our peers or those with the qualifications to know. Certainty is not independent of the minds of the knowers or observers. “Facts” are things of which we are certain, as is what “science has found.” It can all be overturned by new evidence. Then we will be certain of something else.

We must live our lives based upon our knowledge of the world, including the physical, social, political and economic “facts” available to us. These facts will change, either within our lifetimes or in the future. All facts do. Everything we base our decisions upon now will turn out to be different or wrong sometime in the future. But we must still make decisions.  Each of us can decide to base these decisions upon what the “best evidence” of modern science has taught us is most likely to be the case (e.g. rationalists), or upon what our belief systems  tell us is true, independent of any scientific evidence (e.g. religionists), or upon our value systems taught to us by our culture and to which we subscribe through personal commitment (e.g. patriots), or upon our personal proclivities without thought beyond our immediate satisfaction (e.g. shoppers), or upon just thinking and doing what everyone else does (e.g. sheep).

Most of us base our decisions upon each of these different paradigms at different times and in different situations. Is one basis for decision making better than another? Yes, I think so. I prefer the scientific, rationalist one. But probably no single basis for making decisions is the most useful one in all situations. In my opinion it is best to be acutely aware of the relativity of the theory (belief, religion, preference) you are using and its tenuous claim to certainty. Then, with such awareness in mind, make your choices and live with the anxiety of knowing that the basis you used for making your decision was in some sense arbitrary, in many ways fallible, and in the long run just another choice you made. If you want to know how well this works, see Noel Mawer’s review of the fiction of Camus in this issue of this review. Camus got the idea.

 

How the United States Celebrates World Philosophy Day

            World Philosophy Day 2011 will take place on November 17 at UNESCO Headquarters in Paris. During this event, which has occurred annually since 2002, UNESCO will provide philosophers, researchers, teachers and students as well as the general public with a wide variety of conferences on various subjects, such as the equitable sharing of scientific benefits, philosophical meanings of the political upheaval in the Arab world, the role and the place of women philosophers in the exercise of thinking, philosophical practices with children, philosophy and equal opportunities at school.

            Why is philosophy important to UNESCO?

            The first clause of the UNESCO Charter’s Preamble states, “Since wars begin in the minds of men, it is in the minds of men that the defenses of peace must be constructed.” According to their own statements, “For UNESCO, philosophy provides the conceptual bases of principles and values on which world peace depends: democracy, human rights, justice, and equality. Philosophy helps consolidate these authentic foundations of peaceful coexistence.”

            And now the United States, which provides 22% of the financial support for UNESCO, has withdrawn funding for the organization. The reason: UNESCO voted overwhelmingly to admit Palestine as a member.

            U.S. State Department spokeswoman Victoria Nuland stated, "Today's vote by the member states of UNESCO to admit Palestine as member is regrettable, premature and undermines our shared goal of a comprehensive just and lasting peace in the Middle East. The United States will refrain from making contributions to UNESCO."

            White House spokesman Jay Carney said, "Today's vote distracts us from our shared goal of direct negotiations that results in a secure Israel and an independent Palestine living side by side in peace and security."

            In February of this year the United Nations Security Council voted on a resolution that condemned all Israeli settlements in occupied Palestinian territory as illegal, called for Israel and Palestine to follow the Road Map for Peace plan, and for both parties to continue negotiations to end the Israeli-Palestinian conflict. Over 120 U.N. member states supported the resolution.

            Despite votes in favor of the resolution by all the other 14 members of the Security Council, the U.S. vetoed the resolution. US Ambassador Susan E. Rice, while defending the U.S. veto, admitted that, "We reject in the strongest terms the legitimacy of continued Israeli settlement activity.... Continued settlement activity violates Israel’s international commitments, devastates trust between the parties, and threatens the prospects for peace.…"

            The U.S. has now opposed Palestine’s admission to the U.N. as well as to UNESCO. This last action by the U.S. is the most devastating because it is accompanied by withdrawal of financial support for the UN cultural and scientific agency.

            As the Jordanian Parliament stated in response to the U.S. announcement, “The US decision ..., was taken to punish UNESCO for the member states’ democratic and just voting to grant Palestine what it deserves. Washington’s move is strange because the United States tries to convince the world that it is a protector of democracy and freedom.”

            The actions of the United States in these matters are not the actions of a country that claims to be a defender of freedom and a champion of the democratic process. The U.S. withdrawal of financial support for UNESCO is the action of a petulant bully that blindly follows a policy of refusing to alienate its strongest Middle East ally, Israel (which deserves American support, but not in this way), and of refusing to anger Israel’s supporters here at home, even when it admits that Israel’s actions are wrong. The U.S. pronouncements on these issues are hypocritical and disingenuous, and it is time for America to place truth and morality ahead of political and strategic calculations.

The Editor

 

Wednesday
May182011

The Death of Osama Bin Laden

Like most Americans, I feel relief knowing that Osama Bin Laden is dead, because I think this means that the long-term risk from terrorism is lessened, since one group of terrorists, al Qaeda, has lost its symbolic leader. But the fact that we ended up having to kill him in a military operation means, to me, that we failed to try to communicate with him or his group or to understand them, and chose instead to meet violence with violence. Despite the relief we may have felt on hearing the news of his death, his death was not something to celebrate. The West has dismissed the beliefs of Islamic fundamentalists as evil or crazy and not to be tolerated by a civilized society. We have not asked ourselves why some people believe as they do, and we have completely ignored Western actions such as placing military bases in their holy lands, backing cruel dictators who suppressed their own people but supported our policies, invading Muslim countries for misguided reasons, killing countless Muslims either as enemy combatants or as "collateral damage," and the great impact of Western culture upon the value systems and behavior of their young people, all of which might seem to them legitimate reasons to want to attack us. I deplore celebrating anyone's death, especially a deliberate, violent death, and I regard the manner of such a killing as a failure on our part to look for the common humanity in others and to seek a peaceful, civil way of resolving our differences.

It is time that we Americans became more honest in our assessment of world events instead of seeing them only from our own perspective. Pakistan has been critical of the U.S. for violating its sovereignty in the intrusion into its air space and the attack on its soil that resulted in Bin Laden's death. The American reaction has been to justify these actions because of the magnitude of the target and the role that ridding ourselves of al Qaeda's leader plays in our own self-defense. These certainly are legitimate factors, but we also need to take seriously the complaints against our actions made by Pakistan. If the shoe were on the other foot and the Pakistanis identified an enemy of theirs in the United States, would we stand for them sending in an armed team to kill that enemy on our soil? Would we allow any other country in the world to violate American air space with the intent of mounting a military attack? Of course we wouldn't, but we justify doing these things because we are America and we were wronged by al Qaeda and Bin Laden - and because we know that we can brush aside Pakistan's objections.

In Libya, NATO is present to fulfill a United Nations resolution to protect civilians. Our television news channels have reported on the attacks of Ghaddafi's military forces on civilian targets, while at the same time showing pictures of so-called civilian rebels with tanks, artillery and often uniforms (ironically, NATO apparently believed its own rhetoric; it accidentally bombed rebel tanks because, as it explained, "We didn't know the civilian rebels had tanks."). Both our news channels and our government have cheered the rebels on as they have tried to mount counteroffensives to take Libyan cities, and NATO air power has offered support for such operations. In the guise of protecting civilians, NATO struck Ghaddafi's residence with a missile and killed his youngest son and three of his grandchildren. These actions may be necessary if we want the rebels to win the civil war and remove Ghaddafi from power, but let us not say that they are being done to protect civilians when they in fact kill civilians, including children. Americans, including our President, are unanimous in saying that the Libyan leader must be removed, but shouldn't we remember that this is someone else's country and someone else's civil war? The United States cannot, in my mind, claim a moral superiority that allows us to meddle willy-nilly in other countries' affairs and remove their leaders because we don't like them. If we are going to continue to do this, and it appears that we are, then let us at least be honest about what we are doing and not disguise our actions as moral when we really mean that we are powerful enough to do what we want to do.



Thursday
Feb102011

Is One False Story as Good as Another?

Is one false story as good as another? The obvious question is, “Good for what?” I have argued vociferously in favor of verifiable ideas and against fanciful ones, particularly religious ideas. My argument has been that I don’t like believing in something that requires outright denial of what appear to be obvious facts, and I don’t like believing in something that requires a suspension of either logic or critical judgment.

But aren’t all so-called facts true only from a particular perspective, and don’t many of the beliefs of modern science derive from scientific and mathematical operations that are obscure and inaccessible to most of us? This has probably been true of most of the great discoveries of science for centuries. Newton’s assertions about gravity were based upon a mathematics few people of his day could understand and were no more obviously true to the average person than counter-assertions about ether or about elements that caused attractions between bodies. Copernicus’ heliocentric theory was also based upon complex mathematics (some of which turned out to be wrong) and was more counterintuitive than the common man’s observation that the earth was stationary and the sun and the stars revolved around it. Even Darwin’s theory of evolution was not a story about the origin of species that coincided with the observations of the average person, and his version of natural selection was less obviously true than rival ideas such as Lamarck’s notion of the inheritance of acquired characteristics.

Even now, we can debate the big bang theory of the origin of the universe and champion it over, say, the continuous creation theory without, unless we are astrophysicists, understanding either the mathematics or the science behind either theory. Yet we are comfortable believing that one or the other theory is true and claiming that the biblical creation theory is clearly false, when, for most of us, the only evidence we have is the opinion of scientists.

Newton’s theory of gravitation was shown to be only approximately true for a certain range of phenomena when Einstein developed his theory of relativity and Einstein’s theory was then shown to apply satisfactorily only for another limited range of phenomena when Bohr and others developed quantum theory. Sigmund Freud turned the field of human personality and behavior on its head when he developed his psychoanalytic theory of unconscious motivation. Nowadays, Freudian theory is generally taken to be an outmoded, unnecessarily anthropomorphic story about how thoughts and behavior emerge from a person and has been replaced by neuroscience theories of the functioning of systems of brain cells.

It is difficult not to believe that nearly everything we “know” now, in the early decades of the twenty-first century, will turn out to be either wrong or limited in application or perspective sometime in the future. Does it really matter, then, whether we believe in “scientific” explanations of ourselves and the world we live in or “religious” ones?

Most religious beliefs are based upon premises that are difficult to prove scientifically, at least in the world in which we now live. The assumption that there is an all-knowing consciousness either directing, observing or judging everything in the universe is not something that can be proven or disproven. That some aspect of our own consciousness persists after we die is another unprovable and undisprovable belief. Even the personal feeling a particular person has of being “in touch with,” “possessed by,” or “at one with” a supreme being may be the kind of culturally generated quality of consciousness that, for someone within the culture that generated it, is impossible to doubt or to examine objectively, since one cannot stand outside of one’s own culturally determined personality to examine oneself, and a second party’s observations about such a phenomenon cannot overrule what one experiences first-hand.

Karen Armstrong’s remarkable book, The Case for God, makes it clear that there have been times and places in history when, at least in the view of some leading thinkers, religion and science were not at odds. St. Augustine, for instance, favored a “principle of accommodation,” which asserted that God presented his revelations to humans in the language that fit their understanding at the time. Thus, according to St. Augustine, Biblical stories were not literal descriptions of objective events that could challenge scientific explanations of those same events, and new scientific discoveries and explanations of nature presented no threat to religion. Certainly Copernicus, who was highly religious and, in fact, presented his ideas in more or less religious terms, subscribed to this view. To Copernicus, the beauty and symmetry of a heliocentric system of the sun and the planets was a testament to God’s wisdom and glory, not a challenge to it.

Galileo did not share Copernicus’ religious framing of the heliocentric theory, but he saw no conflict between science and religion in general, or between scientific verification of Copernicus’ theory, which he thought he had provided, and Biblical stories. Galileo believed, and argued as much, that much of the content of the Bible consisted of poetic expressions of religious truth, not to be taken as fact in the way that scientific evidence provided fact. As Karen Armstrong points out, such a view was espoused in St. Augustine’s principle of accommodation and was acceptable to the Catholic Church until around the 16th century; after that, the Church became more dogmatic and pushed for a literal interpretation of all of the words of the Bible as well as for accepted assertions of the Church that were based upon Aristotelian philosophy.

Even today, with “Creationism” and fundamentalist religion gaining greater favor, we see religious narratives portrayed as plausible alternatives to scientific narratives (e.g. evolution), rather than as two different ways of expressing ideas, a scientific and a poetical one, that cannot be in conflict, since they do not belong to the same category of explanation. Taking the words and stories of the Bible literally would be like taking Christina Rossetti’s poetic words, “My heart is like a singing bird,” literally and being forced to disavow centuries of discoveries about human anatomy. Science provides one kind of narrative about the world and human existence while religion provides another, but they are not equivalent, nor should they compete with one another.

Scientific theories, discoveries and explanations do not provide a guide to behavior because, even if they can predict the outcomes of different actions, they do not include values that would lead to favoring one outcome of an action over another. Religious beliefs do include values, so one could base one’s decisions about how to act upon his or her religious beliefs, but not upon one’s scientific beliefs.     

Because religious beliefs express values, these values may differ between different religions. Hindus place a value on not injuring other creatures and generally refrain from killing animals for their meat, as do monks and some followers of Mahayana Buddhism, for similar reasons. Monks of the Theravada sect of Buddhism do consume meat, but do not kill animals themselves to provide it. Christians, Jews and Muslims have no prohibition against killing animals, and the latter two religions have a history of animal sacrifice.

The values inherent in a religion are not always easy to discern, nor do they always determine behavior. The inclusion of a “golden rule” type of dictum, to behave toward others as you would want them to behave toward you, is often cited as occurring in nearly all of the world’s religions. However, this is a dictum not often followed by members of most religions, or even by the religions’ leaders. Many religions, including Buddhism, Hinduism, Christianity and Judaism, prohibit killing of human beings, yet followers of these religions have waged wars, mounted terror campaigns and even sanctioned legalized killing of criminals, “witches,” “blasphemers,” and members of other religions, often in the name of their own religion.

So does the story one tells oneself matter? Most likely it does, and if what matters are the values one tries to use to guide one’s behavior, then religion is a potent source of such values and may offer a ready narrative about the person and his or her relationship to others and to a supposed god. I have often wondered whether, when one kills in the name of religion or ministers to others in the name of religion, the religion is the cause of such behavior or just an after-the-fact rationalization for it. The answer is probably that it may be either one.

If people do not have religious beliefs, what is their source for the values to guide their behavior? Certainly philosophy has offered candidates that rival those put forth by any religion. A good example would be Kant’s Categorical Imperative to "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." In other words, act in such a way that you would desire that all people acted similarly. Because such a rule still leaves room for the person who, say, desires that all people seek to kill members of a certain race or religion, Kant added a second and a third imperative: that one should “act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end,” and that “every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends.” These three imperatives, taken together, ensure fair and humane treatment for anyone and everyone.

The work of Frans de Waal with chimpanzees demonstrates that some of what we refer to as moral behavior is also evident in primates and may be the basis for human morality. Altruistic behavior, at least directed toward members of one’s own family or colony, is not uncommon in chimps, bonobos and capuchin monkeys and involves some capacity for empathy. Such behavior appears to be tied to the emotions in these primates. Now these primates do not have external language of any degree of sophistication and presumably also lack an internal dialogue to guide their behavior. Yet they show some elements of the types of behaviors that we associate with being guided by a philosophical or religious stance. Does this mean that human philosophic-religious narratives are rationalizations for ways of behaving that are genetically wired into us when we are born?

In his book, Primates and Philosophers: How Morality Evolved, de Waal and his commentators agree that primate “moral” behavior is mostly limited to the circle of family and colony members and does not include a “universal” notion that encompasses all members of the species or all living things, which is something many human philosophies and religions do include. Of course, despite such inclusions, religions in particular have provided a source of antipathy toward other religions, nations and races and a marked lack of empathy toward one’s enemies, usually described as seeing people who are different from oneself as “less than human.” There is often a marked discontinuity between the universal morality espoused in the writings of a religion and the behavior of the people who espouse that religion.

The religious or philosophical narratives we develop and try to live by may have their source in behavioral and emotional tendencies that are part of the genetic endowment of our species. In this sense, they are a sort of grand, after-the-fact rationalization for how we find ourselves feeling and behaving. It may be a limitation of that genetic endowment that such moral tendencies are directed mostly toward those we see as similar to ourselves. We can only overcome this tendency by creating narratives that are more inclusive and universal. So far, in my opinion, the world’s religions, as well as the national identities that have developed, have not been successful in extending the tendency to act morally to people’s interactions with those whom they see as unlike themselves. In some cases both religious and national narratives have fostered suspicion and hatred toward those who do not share one’s religion or nationality. It is time to assess whether such insularity is endemic to religion and national identity and, if so, to develop narratives based upon different premises.

 

 

 

 



Sunday
Feb062011

Which comes first, consciousness or behavior?

Philosophers from Paul Ricoeur to Daniel Dennett to Owen Flanagan, to name only a few, have posited that the sense we have of ourselves is constructed in narrative form, that is, in the form of a story, with ourselves as the central character. We are, simultaneously, actor, storyteller and listener to this story. As actor, we are aware of the previously occurring events and plot and have a general idea of where we believe the story is going, but at any particular moment we may be reacting to events that had not been considered or had been poorly estimated, and both the events and our reactions to them become incorporated into the story only after the fact.

To me, the nagging question has always been how much of the coherent, story-like construction of our experiences occurs after the fact. I don’t find it difficult to conjecture that our normal way of operating is to construct our conscious experience, and thus our narrative construction of events, after we have already acted. Benjamin Libet’s research, in which conscious decision making actually followed evidence of the brain’s initiation of actions, gives credence to this conjecture. To the extent that our conscious experience follows our behavior, we can see it as a rationalization of our behavior, a fitting of the behavior (probably with some distortions of memory about it) into our coherent narrative.

I can see two objections to the argument that consciousness always follows behavior. One, of course, is the argument I have made before: that we often do plan what we are going to do and then, in fact, follow that plan. When I get up in the morning and pick out clothing that is appropriate for the activities on my day’s schedule, I am making decisions based upon a narrative about my day’s activities and the appearance I want to present, which in turn is based upon my narrative about who I am, what people will think when they see me, what events will impinge upon me, etc. It is less plausible that I choose clothing and make up the story of why I did so after the fact.

The second objection is that the argument that consciousness follows behavior and is a reaction to it assumes that behavior (and the brain-related events that support it) is one event and conscious experience is another, with one causing the other. Actions, however, as Aristotle pointed out, are meaningful behaviors. Brain events can cause other physical events, but they cannot, by themselves, result in a meaningful action. That action achieves meaning as a consequence of its being situated within a narrative. The narrative itself may precede, accompany or follow the behavioral component of the action. Flinging my arm out and knocking a lamp from a table because I have been startled by the sound of gunfire behind my back is a different action than doing the same thing as a display of my anger. But the two actions are not different merely because of the after-the-fact interpretations I give to them; they are different as they are occurring. The behavioral sequences involved in the actions are orchestrated differently because of the conscious elements that precede and accompany each action. Thus behaviors themselves have a narrative quality, which occurs both prior to and at the same time as the behaviors themselves and may determine the quality of the behavior (do I hit the lamp with enough force to break it, or do I lessen the blow so that it shows my anger but causes less destruction?).

The answer to the question of whether narrative consciousness precedes, and is a causal factor in, behavior, or whether its causal role is only an illusion (in fact a story we tell ourselves) and the narrative consciousness is always after the fact, may be unanswerable. Both situations appear to happen at different times.

 



Wednesday
Jan262011

In the aftermath of violence

The tragic killing and wounding of people in Tucson by a young man who apparently suffers from mental illness loosed a storm of verbiage and opinion from politicians and the media.  Everyone seems to agree that the word, “senseless” applies to the shootings. Some people blame the volatile political rhetoric, which has been cited as creating an atmosphere that encourages violent expression of emotions as a response to political dissatisfaction. Others dismiss such accusations and point to the gunman’s obvious mental disturbance as the sole cause of the tragedy.

There is no question that we live in a society in which violence is not only prominent, but celebrated. Video games, movies, and entertainment such as wrestling and UFC portray raw demonstrations of violence as evidence of strength and courage. No one is more honored in our society than its soldiers who risk their lives to go to war, ostensibly to defend the country, although none of our opponents in these wars has posed a direct threat to the United States since Japan in World War II. In the aftermath of the Arizona shootings, I am sure that both the federal and the Arizona state governments will clamor to gain jurisdiction over the case, each of them arguing that it should be the one that is allowed to kill the defendant if he is found guilty.

Although there is no dearth of finger pointing in the effort to assign blame for the Arizona shooter’s actions, in the end we will be left only with conjectures and opinions. If the shooter turns out to be psychotic, does that mean that the content of his delusions, and thus his target, was not influenced by the events, discussion and ideas that were active around him in the society of which he was a part? Who knows? Certainly the young man’s access to a gun was a societally determined issue. And, although there will no doubt be a clamor for greater control over access to guns by people with mental health problems, such a discussion will obscure the real issue of access to guns, particularly those such as the handgun used by the suspect, which are clearly not meant for hunting wild game. One of the sillier comments made in response to the shooting came from psychiatrist E. Fuller Torrey, a world-renowned expert on schizophrenia, who claimed that the real issue in such a set of killings was the failure to treat mental illness, not access to semi-automatic guns. He pointed out that, even without a gun, the killer could have injured or killed the congresswoman by using a knife or other means. Really? And would he have been able to kill six other people and wound another 19 in less than two minutes if he had only been wielding a knife? Securing adequate treatment for everyone in the country who has mental illness is a much more daunting task than restricting access to semi-automatic weapons.

If we, as a society, are to reduce the likelihood of events such as the one in Arizona happening again, or of the other less newsworthy but equally tragic shootings that happen all too often in our neighborhoods and even in our schools, then we need to address the issues that contribute to these events. We cannot deplore the use of violence for “senseless” reasons while celebrating its use for settling disputes, for entertainment, and as a form of punishment and retribution administered by the government. We cannot continue to separate easy access to the means for violence from the use of such means to commit violence, especially with trite sayings such as, “guns don’t kill people, people kill people.”



Sunday
Mar212010

On to Social Criticism

I find myself amazed to realize that it has been over eight months since I entered anything into this blog. I have been spending most of that time either consumed by my “real” job, or writing or reading fiction. My personal quest for the absolute perfect way to express a thought in writing takes the form of writing novels, short stories or poetry more often than writing philosophy. Philosophy, even verbal philosophy, has an affinity to mathematics in its requirement for precision, while poetic license is just what it says it is: something applicable to poetry and fiction. I am enamored more with the use of words than with the generation of ideas (a fact which even disserves my fiction, since I am challenged to generate plots and more interested in refining the way I say things).

Nevertheless, I’m back to writing philosophy. What I’ve written so far provides ample justification for the claims that: 1) we often don’t know why we do things - i.e. our true motivation is obscure even to ourselves; 2) we think we know why we do things, but that is just an illusion we create, perhaps as an after-the-fact rationalization; 3) there are some instances when our conscious decision-making plays a causal role in determining our behavior; and 4) even when we are making conscious decisions, we are not often logical, by any philosophical definition of logic.

I should add that none of the above implies that we don’t make decisions, almost continuously, but the choices underlying those decisions are not made consciously.

I have also, I think, given a cogent argument that the conscious decision making that has the greatest effect on our behavior is of a  big-picture type, which allows us to arrange our lives in such a way that our daily, habitual behaviors will fall in line with our overall value systems.

Can we, then, choose our value systems? I said previously that our value systems come from "what we were told by significant others, such as our parents; what we learned in our education, formal and informal, within our culture; what our religion taught us; what we learned from unique experiences (encounters with specific individuals, events that are unique to our own lives, such as attaining specific goals, crushing disasters and defeats, love affairs, etc.)." Many, if not most, value systems include prescriptions for behavior that will increase the likelihood that the person who subscribes to that value system will actually be able to behave in accordance with the values espoused by it. Religious systems are good examples of this. The Muslim and Catholic religions are known for their emphasis upon prohibiting engagement in experiences that would make it difficult to follow the values of the religion. That is why Muslim women do not expose much of their bodies and the Catholic Church discourages attending certain movies. Fundamentalist Christian religions tend to frown on listening to scatological music or looking at risqué pictures, or on allowing even parts of society to sanction practices that might lessen the prohibition against them for others (e.g. allowing same sex marriage).

Parents who are successful in instilling their own values in their children are aware that doing so requires constant encouragement of some types of behavior (studying, politeness, responsibility), the involvement of their children in some activities in order to discourage their involvement in others (e.g. playing sports instead of hanging out), and absolute prohibition of some activities which, although not harmful in themselves, could “lead to” increasing the likelihood of involvement in more harmful activities (e.g. not going to parties where older children or young adults are part of the group). Most parents agree with a large body of research, which has shown that the single most important thing parents can do to guide their child’s behavior in the direction they want is to ensure that the child socializes with other children who already behave in that way. By doing so, parents can make it more likely that their children will develop “good habits,” which will follow the parents’ value system.

So selection of a value system is important in determining behavior. It is the job of religious leaders and social leaders and social critics to advocate for one set of values over another. It would not ordinarily be the job of a philosopher to do so. In my case, my venture into philosophy was simply (although it seemed to go on endlessly) to provide a justification for the claim that choosing a value system made a difference and that such an act of choosing was possible. Now that I have done that, I am prepared to assume the mantle of social critic.

In my search for examples of the perfect way for a writer to express him or herself in English, I took up reading Lawrence Durrell’s Alexandria Quartet for the umpteenth time. In my opinion, Durrell is able to combine nearly perfect word choice and sentence structure with captivating plot and characters. However, many of his characters are believers in arcane religions and philosophies, such as Cabalism, Zoroastrianism, Gnosticism, etc. In particular, I came across a segment within Clea, the last volume of the Quartet, in which a character describes, in a letter read by the narrator, the growing of a set of homunculi in jars, the homunculi being able to communicate with their creator and convey ideas. Eventually the homunculi must be destroyed, which incites a great upheaval of natural elements (smoke and fire). The characters reading the letter (the narrator, Clea herself, and a physician, Balthazar) must decide whether they believe the account of the homunculi. Clea, being a mystic, does; Balthazar is undecided; and the narrator, as is typical for him, remains noncommittal.

While enjoying the writing in the Alexandria Quartet, I found myself, after reading this particular account of the creation and destruction of the homunculi, irritated, not at the writer or the story or the characters in the story, but at the many people I know who, had they been the characters reading the letter, would have accepted its truth. While they would have admitted that such an account was not possible by any scientific standards of which they were aware, they would have blithely cited the limitations of science (as does Balthazar, incidentally) in explaining everything in nature. Or, using a type of reasoning that is, in my opinion, all too prevalent, they would have acknowledged the impossibility of the existence of the homunculi according to any scientific or natural laws or observations, but chosen to believe it anyway, because it was appealing to them.

Believing in the existence of homunculi does not constitute a value system, but believing that it is permissible to suspend the necessity of using empirically ascertained truth as the basis for one’s beliefs does, I would argue, constitute at least a value, which in turn may influence one’s choice of a value system.

 



Saturday
May302009

Part 2: Leaping Without Faith

There is no God, no plan for the universe, no plan for my life, no higher meaning that resides in the perspective of an omniscient being, no guarantee that my dreams or hopes will come true or even that I will live tomorrow. Yet I often live my life as if these things that are not real were, nevertheless, true. Why is that? One reason is that people believe in superstitions, and even when we know that superstitions are not true, we still act as if they are.

Human beings are constructed so that they repeat what was previously successful. This is a simple rule of behavior, followed by most animals, which was no doubt selected through evolution in preference to other rules that did not lead to equal reproductive success. If what I do is followed by a positive outcome every once in a while, I may repeat it whenever I can in order to make those once-in-a-whiles come along more often. If what I do is only coincidentally related to the outcome, I will repeat the behavior as though it caused the outcome. This is called superstitious behavior, and the thoughts that accompany the behavior are called superstitions. Superstitious behavior occurs with low-probability outcomes of high value - gambling for money, getting hits in baseball, selling houses, overcoming illnesses, having hit records, publishing books, winning wars, etc.
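As an illustration of how little it takes for this to happen, here is a minimal simulation sketch of my own, written in the spirit of Skinner's famous pigeon experiment rather than as a reproduction of it. The reward arrives on a fixed schedule, entirely independent of what the learner does, yet a simple repeat-what-worked rule ends up strongly favoring whichever arbitrary action happened to precede the rewards:

```python
import random

# A toy model of superstitious conditioning: reward arrives every 15th step,
# regardless of behavior, but whatever the learner happened to be doing at
# that moment gets its tendency strengthened ("repeat what worked").
random.seed(7)

actions = ["peck left", "peck right", "turn in a circle", "bob head"]
weights = {a: 1.0 for a in actions}   # equal starting tendencies

for step in range(1, 5001):
    # choose an action in proportion to its current weight
    action = random.choices(actions, weights=[weights[a] for a in actions])[0]
    if step % 15 == 0:                # non-contingent reward on a fixed schedule
        weights[action] += 1.0        # credit whatever the learner was doing

for a in sorted(weights, key=weights.get, reverse=True):
    print(f"{a:16} final tendency: {weights[a]:.0f}")
```

Run it a few times with different seeds and a different action wins each time, which is the point: the "winning" ritual is an accident of coincidental timing, not of any real effect on the outcome.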

Human beings are plagued by the superstitious mythologies that they make up to try to control events that are often out of their control. This can be counterproductive if the events are, in fact, within their control and they waste time operating within the mythology instead of doing what will make the desired outcome more likely to happen. With most events there is some likelihood of influencing the outcome through human action, although the extent of that influence may be very small.

Guilt and anxiety are unpleasant emotions that are sometimes generated justifiably and sometimes not. Anxiety is the feeling that something bad is going to happen, and the desire to reduce the anxiety can be the reason we do something to avoid the bad thing happening. This is a good strategy if (a) it reduces the likelihood of the bad thing happening, and/or (b) reducing the anxiety has a beneficial effect by itself. Reducing anxiety can have a beneficial effect when high anxiety interferes with good decision making. It can have a negative effect if reducing anxiety takes away the motivation to avoid the bad thing happening.

If elements of my lifestyle (smoking, drinking alcohol, not exercising, engaging in unsafe sex) produce anxiety because they raise my fear that I could die, then that's good if it results in me changing my lifestyle to reduce my anxiety (thereby avoiding a risk for premature death) but it is not particularly good if it results in me seeking out testimonials by people who have also engaged in such behaviors but lived a long time as a method of reducing my anxiety (thereby not altering my risk of premature death).

A lot of people's time is spent engaging in behaviors that reduce anxiety but do not alter the likelihood of good or bad things happening. The kinds of behaviors I am talking about include praying; reciting to oneself or to others anxiety-reducing, but more or less meaningless, statements (e.g. "if it's meant to happen it will," or "I could live my life following every rule and still get hit by a bus tomorrow"); and engaging in efforts to solicit agreement from others (we gain comfort when others agree with our position, even though that may not mean anything about its truthfulness). Needless to say, these kinds of behaviors are counterproductive if they are done instead of doing something that will actually reduce negative consequences or increase positive ones. We continue these counterproductive activities because the short-term consequence of reducing our anxiety is more powerful in controlling our behavior than is the long-term consequence of having no effect on the outcome of the event.

 

 

 

 

Monday
Feb162009

Wrestling with Descartes's Demon

"Cogito ergo sum," I think, therefore I am could be the slogan for either a supercomputer or a race of androids. This has been a theme for such writers as Philip K. Dick and is the premise for my own thriller, I, Carlos. Descartes' demon, the evil presence that could have fooled him into thinking that everything he perceived was real, when it fact it was not, was, according to Descartes, unable to fool him about the single fact that he was thinking at the time. However Rachel Rosen, in P.K. Dick's Do Androids Dream of Electric Sheep?, or Carlos the Jackal in my novel, I, Carlos, are, in fact, fooled by thinking they are human, when in fact they are machines. If the possibility of such android thinkers removes Descartes' last defense against his demon, what basis, then, do we have for justifying belief in anything?

It is probably wrong to say that the fact that Rachel Rosen or Carlos the Jackal, or Data from Star Trek: The Next Generation, for that matter, can think negates Descartes' assertion that I think, therefore I am. In fact, the argument made explicitly in I, Carlos is that Descartes was right and that a computer program that thinks is a sentient being, given that the quality of the thinking satisfies certain criteria we usually ascribe to beings - consciousness being the primary one. (Whether computers can be conscious is another question, to be addressed later - in the affirmative.)

The problem faced by Descartes can be labeled either a problem of truth or a problem of knowing. The latter problem - how do we know anything - is related to the problem of defining what truth is. Knowing is commonly divided into two categories - immediate experience and knowing about. I can feel wind on my face or I can know what wind on my face feels like, even when I am not feeling it. The first example is often referred to as qualia - immediate sensory or conscious experience - and qualia are usually taken to have a quality of truth to them, in that one can't be mistaken about them. For instance, while standing inside a Hollywood set and feeling the breeze from a large fan on my face, my hair whipping in that breeze and my shirt-tail flying, I can be mistaken in thinking someone has left the door open and the wind has entered the building and that is what I am feeling, but I can't be mistaken that I am feeling it. Even if I were having an epileptic aura and there was no breeze from either a fan or the wind, I could not be mistaken about feeling as if my face were being brushed by a wind. Similarly, as Descartes pointed out, I cannot be mistaken that I am thinking (though I could be mistaken about what I am thinking about, if my reflection on my thoughts is inaccurate because of a poor immediate memory, unconscious distortions, etc.).

So one way to parse the world is into immediate experience and everything else. Immediate experience can also be divided into feeling qualia (e.g. the taste of sweetness or the feeling of pain) and thinking qualia (e.g. awareness of my immediate experience, including my own thoughts). The everything else has many types. We have memories of experiences, mathematico-logical formulations, linguistic formulations, the content of our perceptions, abstract knowledge and deductions (an interesting type of knowledge in that it may be nascent until we begin the process of deducing), and I could go on and on if I had a better imagination (the products of which are another form of knowledge).

If a demon existed - and a figurative demon could simply be the way my mind is constructed - it could fool me about any of the everything-else types of knowledge, but not about qualia. That is precisely Descartes' point, although from there he seems to muddle the process of deducing an overriding truth from his insight. Since I am interested in developing a moral philosophy and not an epistemology, I will take a detour from examining every form and type of knowledge one could have and assert that, in order to get on with the conversation, or any conversation for that matter, it's pointless to question some things. As an example, astronomy, Newtonian physics, mathematics, and my own perception tell me that we live on a planet within a solar system within a larger galaxy, within a larger universe. While a simple examination of the history of science would indicate that this is a fairly recent view of things and that equally plausible views have held sway in past centuries (and probably in this century among some people), my present view, if brought up to speed with recent science (e.g. Pluto is now classed as a dwarf planet rather than a planet), is a workable hypothesis. Similarly, solipsism - the view that everything except my own mind does not exist - while difficult to defeat as a logical point, is problematic as a way of interacting with the world and thus not a fruitful road to go down if one is trying to construct a moral philosophy (which is why Descartes had to rebuild his justification for belief in the world after he first tore it down).

My own moral philosophy demands that the basis for belief in many types of knowledge contained in the everything else (things that I don't immediately experience as qualia) be questioned.

Before I launch into a moral philosophy, or at least into examining whether there can be a basis for a moral philosophy, I should put some of my cards on the table. My cards are my current assumptions, which do not have the gravitas of foundations or even premises, since I acknowledge that they could be wrong and I reserve the right to change them if I decide they are. First let me say that I have been heavily influenced by three thinkers (I can't call them philosophers, since one of them detests the label). The first two are Ludwig Wittgenstein and B. F. Skinner, both of whom taught me to be suspicious of words and to examine how they are used to affect behavior, rather than what they mean in an abstract sense. The last is Jean Paul Sartre, who made it clear to me that it is possible to doubt the justification for one's actions and nevertheless need to perform them, and that honesty requires doing so without removing the doubt.

