Channel: Ethics + Religion – The Conversation

Why we love robotic dogs, puppets and dolls

Why are we drawn to tech toys? Ars Electronica, CC BY-NC-ND

There’s a lot of hype around the release of Sony’s latest robotic dog. It’s called “aibo,” and is promoted as using artificial intelligence to respond to people looking at it, talking to it and touching it.

Japanese customers have already bought over 20,000 units, and it is expected to come to the U.S. before the holiday gift-buying season – at a price nearing US$3,000.

Why would anyone pay so much for a robotic dog?

My ongoing research suggests part of the attraction might be explained through humanity’s longstanding connection with various forms of puppets, religious icons and other figurines that I collectively call “dolls.”

These dolls, I argue, are embedded deep in our social and religious lives.

Spiritual and social dolls

As part of the process of writing a “spiritual history of dolls,” I’ve returned to the ancient mythology of the Jewish, Christian and Muslim traditions, in which God formed the first human from the dirt of the earth and then breathed life into the mud-creature.

Since that time, humans have attempted to do the same – metaphorically, mystically and scientifically – by fashioning raw materials into forms and figures that look like people.

As folklorist Adrienne Mayor explains in a recent study, “Gods and Robots,” such artificial creatures find their way into the myths of several ancient cultures.

Beyond the stories, people have made these figures part of their religious lives in the form of icons of the Virgin Mary and human-shaped votive objects.

In the late 19th century, dolls with a gramophone disc that could recite the Lord’s Prayer were produced on a mass scale. That was considered a playful way of teaching a child to be pious. In the Democratic Republic of Congo, certain spirits are believed to reside in figurines created by humans.

Across time and place, dolls have played a role in human affairs. In South Asia, dolls of various forms become ritually important during the great goddess festival Navaratri. Katsina dolls of the Hopi people allow them to create their own self-identity. And in the famed Javanese and Balinese Wayang – shadow puppet performances – mass audiences learn about a mythical past and its bearing on the present.

Making us human

In the modern Western context, Barbie dolls and G.I. Joes have come to play an important role in children’s development. Barbie has been shown to have a negative impact on girls’ body images, while G.I. Joe has made many boys believe that they are important, powerful and that they can do great things.

Barbie dolls. Tinker Tailor loves Lalka, CC BY-NC

What is at the root of our connection with dolls?

As I have argued in my earlier research, humans share a deep and ancient relationship with ordinary objects. When people create forms, they are participating in the ancient hominid practice of toolmaking. Tools have agricultural, domestic and communication uses, but they also help people think, feel, act and pray.

Dolls are a primary tool that humans have used for the spiritual and social dimensions of their lives.

They come to have a profound influence on humans. They help build religious connections, such as teaching children to pray, serving as a medium for answering prayers, providing protection and prompting healing.

They also model gender roles and teach people how to behave in society.

Tech toys and messages

Aibo and other such technologies, I argue, play a similar role.

Part of aibo’s enchantment is that he appears to see, hear and respond to touch. In other words, the mechanical dog has an embodied intelligence, not unlike humans. One can quickly find videos of people being emotionally captivated by aibo because he has big eyes that “look” back at people, he cocks his head, seeming to hear, and he wags his tail when “petted” the right way.

Another such robot, PARO, a furry, seal-shaped machine that purrs and vibrates as it is stroked, has been shown to have a number of positive effects on elderly people, such as reducing anxiety, increasing social behaviors and counteracting loneliness.

Dolls can have a deep and lasting psychological impact on young people. Psychotherapist Laurel Wider, for example, became concerned about the gendered messages that her son was receiving in social settings about how boys were not supposed to cry or really show many feelings at all.

She then founded a new toy company to create dolls that could help nurture empathy in boys. As Wider says, these dolls are “like a peer, an equal, but also small enough, vulnerable enough, to where a child could also want to take care of him.”

Outsourcing social life?

Not everyone welcomes the influence these dolls have come to have on our lives. Critics of these dolls argue they outsource some of humanity’s most basic social skills. Humans, they argue, need other humans to teach them about gender norms, and provide companionship – not dolls and robots.

MIT’s Sherry Turkle, for example, somewhat famously dissents from the praise given to these mechanical imitations. Turkle has long been working at the human-machine interface. Over the years, she has become more skeptical about the roles we assign these mechanical tools.

When confronted with patients using PARO, she found herself “profoundly depressed” at society’s resort to machines as companions, when humans should be spending more time with other humans.

Teaching us to be humans?

It’s hard to disagree with Turkle’s concerns, but that’s not the point. What I argue is that as humans, we share a deep connection with such dolls. The new wave of dolls and robots are instrumental in motivating further questions about who we are as humans.

Given the technological advances, people are asking whether robots “can have feelings,” “be Jewish” or “make art.”

A question being asked is, can robots have feelings? ellenm1, CC BY-NC

When people attempt to answer these questions, they must first reflect on what it means for humans to have feelings, be Jewish and make art.

Some academics go so far as to argue that humans have always been cyborgs, always a mixture of human biological bodies and technological parts.

As philosophers like Andy Clark have argued, “our tools are not just external props and aids, but they are deep and integral parts of the problem-solving systems we now identify as human intelligence.”

Technologies are not in competition with humans. In fact, technology is the divine breath, the animating, ensouling force of Homo sapiens. And, in my view, dolls are vital technological tools that find their way into devotional lives, workplaces and social spaces.

As we create, we are simultaneously being created.

The Conversation

S. Brent Rodriguez-Plate does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


As Cuba backs gay marriage, churches oppose the government's plan

As gay Cubans gain more rights, opposition is also growing. AP Photo/Desmond Boylan


Cubans are debating a constitutional reform that, among other legal changes, would open the door to gay marriage. It would also prohibit discrimination against people based on sex, gender, sexual orientation and gender identity in the communist nation.

The proposed new Constitution, drafted by a special commission within Cuba’s National Assembly, was unveiled in July. If the National Assembly and President Miguel Díaz-Canel approve the document after a Feb. 24, 2019 public referendum, marriage would be defined as a “union between two people.”

Cuba’s 1976 Constitution, known as the Carta Magna, defines marriage as a union between a man and a woman. And it does not fully protect private enterprise or freedom of association, or allow for same-sex marriage – despite growing social acceptance and political tolerance for such rights.

Emigrés who retain Cuban nationality have been invited to participate in Cuba’s public debate on the constitutional reform – though not to vote on it – via a digital forum run by the Foreign Ministry – a level of citizen outreach that’s “unprecedented” in Cuba, says Ernesto Soberón, the ministry’s director of consular affairs and Cubans residing overseas.

Cuba’s political process opens up

This lively, broad-based debate is a sign of how much Cuba – a main subject of my research as a professor of literature and cultural studies – has changed in recent years.

President Raúl Castro, who took over for his ailing older brother Fidel in 2006, began to open Cuba’s economy to foreign investment and normalized diplomatic relations with the United States, which has maintained its economic embargo on the Communist island since 1962.

Raúl Castro also worked with President Barack Obama to ease some economic restrictions on Cuba.

Castro stepped down in April 2018, handing power over to the much younger Díaz-Canel.

Cuba has moderately amended its Carta Magna just three times. A 1978 constitutional reform created an official channel for youth political participation, for example, while that of 1992 liberalized elements of Cuba’s socialist economic model to revitalize Cuba’s economy.

Today’s proposed reform is a complete overhaul. It would add 87 articles, change 113 and eliminate 13 – including a section of Article 5 affirming Cuba’s “advance toward a Communist society.”

Beyond legalizing gay marriage, the new Constitution would protect private property, limit the presidential term to five years and introduce the role of prime minister.

Intense debate has surrounded the possibility of marriage equality in Cuba, and not just within the government’s official public meetings. Cubans are also discussing and debating gay marriage with neighbors and friends, in the streets and online – a departure from Cuba’s traditionally more top-down style of government.

The rise of gay rights in Cuba

Cuba’s nascent LGBTQ rights movement also began under Raúl Castro, thanks in large part to the leadership of his daughter Mariela Castro, a National Assembly member and president of the semi-governmental Centro Nacional de Educación Sexual, founded in 1987 to advance sexual awareness in Cuba.

A lack of opinion polling makes it difficult to measure Cuban public support for gay marriage. But acceptance of homosexuality, both within the government and in civil society, has grown appreciably.

During the 1960s and 1970s, homosexuality was considered incompatible with Cuba’s model of the revolutionary man: atheist, heterosexual and anti-bourgeoisie. Gay people, active Christians and others who defied these ideals were sent to military work camps to “strengthen” their revolutionary character.

Today, the Cuban government appears to accept homosexuality as part of socialist society. In 2008 the National Assembly approved a law allowing sexual reassignment surgery.

La Habana holds annual marches against homophobia and transphobia and cities across the island celebrate the Gay Pride parade.

The church emerges as an opposition force

But legacies of intolerance remain.

The Assembly of God Pentecostal Church, the Evangelical League and the Methodist Church of Cuba, among other Christian churches, have issued a joint statement opposing gay marriage.

Traditionally, religion has taken a back seat to politics in Cuba. AP Photo/Cristobal Herrera

Their public letter, published on June 8, argues that such “gender ideology” has “nothing whatsoever to do with our culture, our independence struggles nor with the historic leaders of the Revolution.”

Cuba is a secular country where political ideology has historically trumped religion. Religious opposition to a government proposal is rare.

It is even more unusual for the church to attempt to mobilize the Cuban public, as some Christian leaders are trying to do now.

According to the Cuban magazine La Jiribilla, preachers on the streets have been handing out fliers saying gay marriage defies God’s “original design” for the family.

LGBTQ activists answer

Gay rights groups and feminists are responding with a creative show of force.

Clandestina, Cuba’s first online store, and the tattoo studio La Marca are spearheading a campaign called “Cuban design,” celebrating a “very original family” – phrasing that rebuts Christian claims about God’s design.

“More than anything, this is an issue of free expression,” Roberto Ramos Mori, of La Marca, said in an email. “The way to push back against hate is calmly, with intelligence – and, of course, humor.”

Cubans with internet access use the hashtag #mifamiliaesoriginal to signal their support for LGBTQ rights on social media.

The church’s powerful opposition to marriage equality reflects a strategy commonly deployed across Latin America, says the Cuban feminist Ailynn Torres Santana.

Catholic and evangelical groups in Ecuador used similar language, for example, to oppose a 2017 law allowing citizens to choose their own gender identifier, she says. In response to the legislation – which recognized gender as “a binary that is socially and culturally created, patriarchal and heteronormative” – churches called for “citizens to live in harmony with nature.”

Similar scenes played out when both Colombia and Brazil advanced LGBTQ rights, with Christian groups dismissing any attempt to change traditional gender roles as the “result” of what they pejoratively call “gender ideology.”

What’s next for Cuba

Gay marriage is not the only battlefield for Cuba’s newly empowered churches.

Abortion, illegal in most of Latin America, has been a woman’s right in Cuba since 1965. Traditionally, not even Cuba’s Catholic church publicly opposed it.

Recently, though, Christians in Cuba have begun publicly advocating against abortion.

If conservative religious groups manage to prevent gay marriage in Cuba, I believe it would be a setback for social progress on the island.

But the mere existence of alternative voices in Cuba’s public sphere – including that of its churches – is, itself, proof that the country has already changed.

The Conversation

María Isabel Alfonso is co-founder of the not-for-profit group Cuban Americans For Engagement, which works to improve diplomatic relations between Cuba and the United States.

Yom Kippur: A time for feasting as well as fasting

Yom Kippur break fast. danbruell, CC BY-NC-SA

It was the bag of Fritos that gave me away. As a secular Jewish kid whose family did not belong to a synagogue, I did not think twice about riding my bike to the convenience store around the corner during the afternoon of Yom Kippur.

I knew that it was a solemn holiday when observant Jews do not eat or drink. But my public school was closed for the holiday, and there was little to do.

As luck would have it, as I came back around the corner, I nearly ran over a schoolmate who was walking on the sidewalk. I lived in a predominantly Jewish suburb of New York and was conscious that although I wasn’t fasting, he almost certainly was. The bag of corn chips that I was carrying betrayed me as a traitor to my faith.

Years later, as a scholar and author of “Pastrami on Rye: An Overstuffed History of the Jewish Deli,” I came to understand why the Jewish practice of abstaining from food on Yom Kippur is so out of step with the rest of Jewish tradition.

In both its religious and cultural guises, Judaism has always revolved around food.

Eating as a pleasure of life

In ancient times, Jewish priests known as “cohanim” sacrificed bulls, rams and lambs on the altar inside the courtyard of the Temple in Jerusalem, symbolically sharing a banquet with God.

After the Temple in Jerusalem was destroyed in A.D. 70 and Jews were dispersed throughout the Mediterranean basin, food remained a Jewish preoccupation. Because the kosher laws restricted what Jews could put in their mouths, much of every day was spent figuring out what and how to eat.

In 20th-century America, the Jewish delicatessen, with its fatty, garlicky fare, came to rival the synagogue as a communal gathering place.

The worldly emphasis of Judaism has, since ancient times, recognized eating as an essential pleasure of life. A passage in the Jerusalem Talmud states that Jews will be called to account in the afterlife if they have not taken advantage of opportunities to eat well.

Food, according to historian Hasia Diner, “gave meaning to Jewish life.” As the old joke goes, most Jewish holidays can be summed up by a simple formula,

“They tried to kill us. We won. Let’s eat!”

Yom Kippur as a holiday of inversion

But not the Day of Atonement, which is a ritual rehearsal of one’s own death through refusing the demands of the body.

Day of Atonement. Isidor Kaufmann

In Hebrew, Yom Kippur is connected linguistically to Purim, the springtime holiday of masks and merrymaking. But one could well ask: How is the most mournful day of the Jewish year comparable to the most raucous and ribald one?

On Purim, Jews drink alcohol, don disguises and feast on pastries. The element of masquerade, it has been said, makes it the one day of the year when Jews pretend to be other than Jewish.

Not eating on Yom Kippur similarly inverts the normal pattern of Jewish life. It is by abstaining from eating that Jews connect both to God and to their fellow Jews.

A symbol of rebellion?

For secular Jews, there is no better way to rebel against religious Judaism than to dine publicly on Yom Kippur.

In 1888, a group of anarchist Jews in London rented a hall in the city’s East End, where most of the Jews lived, and organized a Yom Kippur Ball with “antireligious lectures, music and refreshments.”

Over the next couple of decades, similar celebrations sprouted up in New York, Philadelphia, Boston, Chicago and Montreal, often triggering protests. Indeed, when Herrick Brothers Restaurant on the Lower East Side of New York decided to remain open on Yom Kippur in 1898, they unwittingly exposed their clientele to violence. Patrons were physically attacked by other Jews on their way to synagogue.

For starving victims of the Nazis, every day was Yom Kippur.

In a famous passage in Holocaust survivor Elie Wiesel’s nonfiction masterpiece, “Night,” the author, who was imprisoned in Auschwitz and Buchenwald, recalls deliberately eating on Yom Kippur as a “symbol of rebellion, of protest against Him,” for His silence and inaction in the face of the Nazi genocide.

“Deep inside me,” he writes, “I felt a great void opening” – not only a physical one, but a spiritual one as well.

A new tradition

Nowadays, most Jews who do not fast on Yom Kippur are simply not part of a community of Jews who participate in synagogue life. Conversely, many non-Jews who are domestic partners of Jews do fast on Yom Kippur.

But whether or not one fasts on Yom Kippur, a tradition has developed over just the last few decades, according to scholar Nora Rubel, of concluding the fast with a lavish, festive meal.

For many Jews, as historian Jenna Weissman Joselit has noted, the break-fast meal is the most important aspect of Yom Kippur, in ways that outshine the religious elements of the day.

Breaking the fast in pop culture

In American popular culture, Jewish characters are often shown breaking the fast – while it is still daylight – with flagrantly non-kosher foods.

In Woody Allen’s 1987 film comedy, “Radio Days,” set in Brooklyn during the Great Depression, a Jewish family is so infuriated that their Communist Jewish next-door neighbor (played by Larry David) is eating and playing music on Yom Kippur that they fantasize about burning down his house. But then the uncle (played by Josh Mostel) goes next door and ends up not only eating pork chops and clams, but being indoctrinated with Marxist ideology to boot.

In a 2015 episode of “Broad City,” Abbi and Ilana down bacon, egg and cheese sandwiches, while in the inaugural episode of the Canadian Internet series “YidLife Crisis,” which debuted in 2014, Yom Kippur finds Chaimie and Leizer in a restaurant consuming poutine – french fries with cheese curds and gravy.

Breaking the fast.

The break-fast meal

In real life, the menu for the break-fast meal typically mirrors that of a Sunday brunch: bagels, cream cheese, smoked fish, noodle kugel (casserole), and rugelach (jam-filled pastries).

However, it may also include dishes from the host’s ethnic Jewish origins. Eastern European Jews traditionally dine on kreplach – dumplings stuffed with calves’ brains or chicken livers – Iraqi Jews drink sweetened almond milk flavored with cardamom, and Moroccan Jews enjoy harira – lamb, legume and lemon soup – a dish that was borrowed from Muslim neighbors who were breaking the fast of Ramadan.

Whatever is on the menu, Jews eat with a vengeance to conclude the holiday, restoring them to the fullness of not just their stomachs but of their very Jewish identities.

The Conversation

Ted Merwin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How an ancient Islamic holiday became uniquely Caribbean

Hosay procession in St. James. Nicholas Laughlin, CC BY-NC-SA

A throng of Trinidadians line up along the streets of St. James and Cedros to admire the vibrant floats with beautifully bedecked models of mausoleums. Their destination is the waters of the Caribbean, where the crowds will push them out to float.

This is part of the Hosay commemorations, a religious ritual performed by Trinidadian Muslims, that I have observed as part of the research for my forthcoming book on Islam in Latin America and the Caribbean.

What fascinates me is how a practice from India has been transformed into something uniquely Caribbean.

Re-enacting tragedy

During the 10 days of the Islamic month of Muharram, Shiite Muslims around the world remember the martyrdom of Hussein, Prophet Muhammad’s grandson, who was killed in a battle in Karbala, today’s Iraq, some 1,338 years ago. For Shiite Muslims Hussein is the rightful successor to Prophet Muhammad.

Ashura, the 10th day of Muharram, is marked by public mourning and a re-enactment of the tragedy. Shiite Muslims put on passion plays that include inflicting suffering, as a way to remember Hussein. In Iraq, Shiites are known to beat themselves with swords. In India, mourners whip themselves with sharp blades. Some Shiites also visit Hussein’s shrine in Iraq.

Ashura procession in Pakistan. Diariocritico de Venezuela, CC BY

The commemoration has also become a symbol for the broader Shiite struggle for justice as a minority in the global Muslim community.

Early history

In Trinidad, the 100,000 Muslims who make up 5 percent of the island’s total population celebrate the day of Ashura as Hosay – the name derived from “Hussein.”

The first Hosay festival was held in 1854, just over a decade after the first Indian Muslims began to arrive from India to work on the island’s sugar plantations.

But Trinidad at the time was under British colonial rule and large public gatherings were not permitted. In 1884, the British authorities issued a prohibition against Hosay commemorations. Approximately 30,000 people took to the streets, in Mon Repos, in the south, to protest against the ordinance. Shots fired to disperse the crowd killed 22 and injured over 100. The ordinance was later overturned.

The “Hosay Massacre” or “Muharram Massacre,” however, lives in people’s memories.

Colorful floats of Trinidad

These days, Hosay celebrations in St. James and Cedros not only recall Hussein, but also those killed during the 1884 Hosay riots. Rather than recreate the events through self-flagellation or other forms of suffering, however, people in Trinidad create bright and beautiful floats, called “tadjahs,” that parade through the streets to the sea.

The tadjah, a colorful model of a mausoleum. Nicholas Laughlin, CC BY-NC-SA

Each tadjah is constructed of wood, paper, bamboo and tinsel. Ranging from a height of 10 to 30 feet, the floats are accompanied by people parading along and others playing drums, just as is the practice in India’s northern city of Lucknow. Meant to reflect the resting place of Shiite martyrs, the tadjahs resemble mausoleums in India. To many, their domes might be a reminder of the Taj Mahal.

Walking ahead of the tadjahs are two men bearing crescent moon shapes, one in red and the other in green. These symbolize the deaths of Hussein and his brother Hassan – the red being Hussein’s blood and the green symbolizing the supposed poisoning of Hassan.

The elaborateness of the tadjahs continues to increase each year and has become somewhat of a status symbol among the families that sponsor them.

A bit carnival, a bit Ashura

Trinidad’s Hosay brings a more carnival-like joy to a somber remembrance. Nicholas Laughlin, CC BY-NC-SA

While the festival is certainly a somber one in terms of its tribute, it is also a joyous occasion where families celebrate with loud music and don festive attire. This has led some to compare Hosay to Trinidad’s world-famous carnival with its accompanying “joie de vivre.”

But there are also those who believe that the occasion should be a more somber remembrance of the martyrdom of Hussein. More conservative Muslims in Trinidad have made attempts to “reform” such celebrations. These Muslims believe local customs should be more in line with global commemorations like those in Iraq or India.

What I saw in the festival was an assertion of both the Indian and Trinidadian identity. For Shiite Muslims, who have dealt with oppression and ostracism – both in the past and in the present – it is a means of claiming their space as a minority in Trinidadian culture and resisting being pushed to the margins. At the same time, with its carnival-like feel, the festival could not be more Trinidadian.

Indeed, the celebrations each year illustrate how Indian and Trinidadian rituals and material culture merged to create a unique festival.

The Conversation

Ken Chitwood does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

One big problem with how Jeff and MacKenzie Bezos are spending a small share of their fortune

Jeff and MacKenzie Bezos are becoming bigger donors. Invision and AP/Evan Agostini

Amazon founder Jeff Bezos and his wife, MacKenzie Bezos, recently announced a plan to spend US$2 billion of their $164 billion fortune on homeless shelters and preschools.

Since Jeff Bezos has taken flak for giving away far less of his money than some other billionaires, such as Bill Gates, the announcement may look like a sign that this tech titan is becoming more generous. The announcement also responds to criticism of the $1 billion per year that Bezos already spends on Blue Origin, his space travel experiment.

But as a political theorist who studies the ethics of philanthropy, I think Bezos’s charitable turn raises grave concerns about the pervasive power of business moguls.

Disturbing trend

The Bezos family’s philanthropy is following an unsettling pattern in terms of its timing. Amazon’s market value had recently topped $1 trillion, raising more questions than ever around Amazon’s overwhelming size and power.

This wasn’t the first time that Bezos effectively redirected attention from Amazon’s immense clout with a big announcement about philanthropy. When news broke in 2017 that Amazon was acquiring Whole Foods, raising new concerns about the company’s retail domination, Bezos made a dramatic public appeal through Twitter for advice on how to focus his giving.

The timing may have been coincidental both times, but the suspicion that philanthropy distracts the public from questionable conduct or economic injustice is a familiar worry. Since the days of robber barons like Andrew Carnegie and John D. Rockefeller, social critics have charged that philanthropy is a wolf in sheep’s clothing.

This cynical view holds that magnificent acts of generosity are nothing more than cunning attempts to consolidate power. Like dictators who use “bread and circuses” to pacify the masses, the super-rich give away chunks of their fortunes to shield themselves from public scrutiny and defuse calls for eliminating tax breaks or raising taxes on the wealthiest Americans.

Demonstrators protested against Amazon and Jeff Bezos, Amazon founder and CEO, shortly before he announced plans to make $2 billion in charitable donations. AP Photo/Cliff Owen

Good intentions are not enough

Today, political theorists who study philanthropy – like Emma Saunders-Hastings and Rob Reich – tend to think the problem is more complicated. They accept that many philanthropists are sincere in their desire to help others, and the solutions donors develop are sometimes remarkably innovative.

But they also contend that noble intentions and strategic thinking aren’t enough to make philanthropy legitimate. And my own research reaches a similar verdict.

That’s because massive donations can perpetuate inequality and threaten democracy in several ways.

Dramatic acts of charity by the ultra-wealthy may reduce pressure on governments to tackle poverty and inequality comprehensively. Depending on private benefactors for access to basic necessities can reinforce social hierarchies. And when the elite spend their own money on essential public services like housing the homeless and education for low-income children, it lets the rich mold social policy to their own preferences or even whims.

In other words, even if Bezos has great ideas, no one elected him or hired him to house the homeless and educate kids before they enter kindergarten. Great wealth is not a qualification for all jobs.

Tax privilege

The tax deductibility of the donations made by the richest Americans can exacerbate these concerns because it effectively subsidizes their giving. Some scholars argue that the point of tax incentives is to encourage donations for things the government can’t or shouldn’t support directly – like maintaining a church property.

Observers, including MarketWatch reporter Kari Paul and Guardian columnist Marina Hyde, have noted that if people like Bezos and the businesses they lead were to stop fighting for low tax rates, democratically elected officials would have more money to spend tackling big problems like homelessness and other urgent priorities.

By making tax-deductible donations, they argue, Bezos is effectively diverting tax dollars to fuel his private judgments about public policy.

Questions about accountability and generosity

My research indicates that using tax deductions to supply essential public services, such as education and housing assistance, may be a misuse of this privilege because it has the potential to undermine democratic control.

Members of the public have a vital interest in being able to oversee the provision of goods and services that support their most basic needs. This kind of accountability is possible only when these needs are served by democratic governments, not rich benefactors operating in their place.

And Bezos’s behavior as a businessman has raised other questions about his generosity and respect for democracy. When Amazon’s hometown of Seattle proposed to tackle runaway housing costs with a tax on the city’s largest employers, Amazon resisted. The city backed off after the company threatened to scale down its Seattle operations if the bill passed.

It may seem odd that someone who opposed a tax intended to help cover housing costs for his low-income neighbors would want to spend part of his fortune on housing. But to me it makes sense, because in my view, Jeff Bezos’s beef isn’t with his duties to help the least fortunate, but with the limits on economic power that democracy requires.

The Conversation

Ted Lechterman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Thirty years on, why 'The Satanic Verses' remains so controversial


One of the most controversial books in recent literary history, Salman Rushdie’s “The Satanic Verses,” was published three decades ago this month and almost immediately set off angry demonstrations all over the world, some of them violent.

A year later, in 1989, Iran’s supreme leader, the Ayatollah Khomeini, issued a fatwa, or religious ruling, ordering Muslims to kill the author. Born in India to a Muslim family, but by then a British citizen living in the U.K., Rushdie was forced to go into protective hiding for the greater part of a decade.

Angry demonstrators protest against the book in 1989. Robert Croma, CC BY-NC-SA

What was – and still is – behind this outrage?

The controversy

The book, “Satanic Verses,” goes to the heart of Muslim religious beliefs when Rushdie, in dream sequences, challenges and sometimes seems to mock some of its most sensitive tenets.

Muslims believe that the Prophet Muhammed was visited by the angel Gibreel – Gabriel in English – who, over a 22-year period, recited God’s words to him. In turn, Muhammed repeated the words to his followers. These words were eventually written down and became the verses and chapters of the Quran.

Rushdie’s novel takes up these core beliefs. One of the main characters, Gibreel Farishta, has a series of dreams in which he becomes his namesake, the angel Gibreel. In these dreams, Gibreel encounters another central character in ways that echo Islam’s traditional account of the angel’s encounters with Muhammed.

Rushdie chooses a provocative name for Muhammed. The novel’s version of the Prophet is called Mahound – an alternative name for Muhammed sometimes used during the Middle Ages by Christians who considered him a devil.

In addition, Rushdie’s Mahound puts his own words into the angel Gibreel’s mouth and delivers edicts to his followers that conveniently bolster his self-serving purposes. Even though, in the book, Mahound’s fictional scribe, Salman the Persian, rejects the authenticity of his master’s recitations, he records them as if they were God’s.

British author Salman Rushdie. Fronteiras do Pensamento, CC BY-SA

In Rushdie’s book, Salman, for example, attributes certain actual passages in the Quran that place men “in charge of women” and give men the right to strike wives from whom they “fear arrogance,” to Mahound’s sexist views.

Through Mahound, Rushdie appears to cast doubt on the divine nature of the Quran.

Challenging religious texts?

For many Muslims, Rushdie, in his fictional retelling of the birth of Islam’s key events, implies that, rather than God, the Prophet Muhammed is himself the source of revealed truths.

In Rushdie’s defense, some scholars have argued that his “irreverent mockery” is intended to explore whether it is possible to separate fact from fiction. Literature expert Greg Rubinson points out that Gibreel is unable to decide what is real and what is a dream.

Since the publication of “The Satanic Verses,” Rushdie has argued that religious texts should be open to challenge. “Why can’t we debate Islam?” Rushdie said in a 2015 interview. “It is possible to respect individuals, to protect them from intolerance, while being skeptical about their ideas, even criticising them ferociously.”

This view, however, clashes with the view of those for whom the Quran is the literal word of God.

After Khomeini’s death, Iran’s government announced in 1998 that it would not carry out his fatwa or encourage others to do so. Rushdie now lives in the United States and makes regular public appearances.

Still, 30 years later, threats against his life persist. Although mass protests have stopped, the themes and questions raised in his novel remain hotly debated.

The Conversation

Myriam Renaud is affiliated with the Parliament of the World's Religions.

Is it immoral to watch football?

What ethical issues should you consider when watching football? Chris Brooks/flickr.com, CC BY-ND

For a large swath of Americans, fall means football. But, as in previous years, this season’s football has been mired in controversy.

Most notable of these has been the Colin Kaepernick case. Kaepernick has accused the NFL of colluding to keep him off the field because of his protests against police brutality and racial inequality during the playing of the national anthem. A recent ruling has granted him a full hearing in the dispute.

And this hasn’t been the only controversy. Scientific findings have shown that playing football regularly increases the risk of brain disease. Allegations regarding the intrinsically violent nature of the game and the increasing commercialization of the sport have been the subject of recent headlines as well.

For fans who consider the sport from an ethical perspective, all these issues raise a question: Is watching football morally problematic?

Football injuries

At its core, football demands skill and tactical acumen. Indeed, as philosopher Alexis C. Michalos said more than four decades ago,

“There’s something admirable about the performance of an excellent running back, a scrambling quarterback or a defensive player with the knack of being in the right place at the right time. Anyone who has tried to match such performances must admire them.”

However, in the way it is currently practiced, football is seriously dangerous for players.

Repetitive brain trauma makes football players highly vulnerable to chronic traumatic encephalopathy, a neurodegenerative disease. A 2017 study found that 99 percent of deceased NFL players who had donated their brains to scientific research suffered from this disease.

The risk of injuries for football players is comparatively higher. Melissa Doroquez/Flickr.com, CC BY-SA

In addition, football players suffer the most injuries among athletes. A study of the injury rates among high school student-athletes estimated that the injury rate for football was twice that of soccer or basketball.

Culture of violence?

In his blistering 1991 poem “American Football,” British writer Harold Pinter, winner of the 2005 Nobel Prize in literature, depicts the sport as “deliberately” violent. The poem, aimed at satirizing the violent character of the Gulf War, portrays war and football as intimately connected.

As scholars who study the ethics of sport, we would argue that while football does require the use of bodily force, it is not inherently violent. Sport philosopher Jim Parry, for example, contests the claim of inherent violence by defining violence as involving “intentional hurt or injury to others.”

It is not inherent violence but a culture of violence around the sport that is troubling.

Nate Jackson, a former football player, describes in his 2013 memoir, “Slow Getting Up,” that for most of his colleagues, the main rewards of the sport relate to violence. For instance, one of the main lessons players must learn to be successful is “decide what you’re going to do and do it violently.”

Similarly, Don DeLillo compellingly captured the rhetoric and ethos of violence surrounding football in his 1972 novel “End Zone.” Gary, the book’s running-back narrator, describes football in militaristic language that resembles warfare.

Furthermore, far from being ideologically neutral, some commentators argue football appeals to conservative values. Registered Republicans have been found more likely to be NFL fans than registered Democrats. Perhaps this could explain President Donald Trump’s denunciation of players who decided not to stand for the pregame national anthem.

More about money?

As for its commercialization, consider the following: In the last decade, the NFL has raked in billions in lucrative broadcasting rights deals. Verizon paid over US$2 billion for five years for the right to stream NFL games across its digital platforms.

It is true, as philosopher Alasdair MacIntyre contends, that social practices need institutions to flourish. In turn, institutions require financial resources to accomplish that goal. The problem, however, comes when institutions pursue those resources at the expense of the very virtues and values that define those practices.

In the case of football, it could be argued that the form and skills that make it appealing have become a model for revenue generation. In the process, its inherent virtues and values have been deemphasized in favor of market values.

As Michael Oriard, a former football player and historian, contends, the story of NFL football “is necessarily about money, lots of money. Professional football has always been about money.” The commercial aspect has become even more prominent as a result of its commodification as a television product.

These days the litany of television commercial breaks has not only negatively impacted the length and pace of games but also driven fans’ attention away from football. Indeed, NFL Commissioner Roger Goodell admitted that the league worried about the impact of commercials on the flow and pace of the game.

What are the ethics?

Football is an important part of America’s shared culture. sunshine.patchoulli/Flickr.com, CC BY-NC-ND

Historians point out that the Super Bowl is America’s largest shared cultural experience. It could be argued that football fans learn to speak and shape their national identity by, among other things, engaging in the sport. Football, in other words, embodies and reveals the main values of the culture, playing a key role in shaping the way in which Americans imagine their common national identity.

Considering all the morally problematic aspects surrounding football, it is worth asking: Is this the kind of social practice around which Americans should imagine and build their national identity?

Editor’s note: This piece is part of our series on ethical questions arising from everyday life. We would welcome your suggestions. Please email us at ethical.questions@theconversation.com.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

The Catholic Church resists change – but Vatican II shows it's possible


Pope Francis has asked the heads of every bishops’ conference around the world to gather for a summit in February to discuss the issue of sexual abuse in the church.

Even as the pope takes these steps, debates continue about what he knew and whether there was a better way of dealing with the perpetrators of abuse. There have been many specific recommendations about what Francis could now do to fix the mess the church faces. These include everything from reforming canon law to elevating nuns to the position of cardinal.

Many of these discussions acknowledge that bringing about any real change in the Roman Catholic Church is hard, even for a pope. However, as a scholar of religious change, I believe what’s missing in these discussions are examples of times when significant change was actually achieved in the church.

The pope’s options

So what exactly can a pope do to change things? First of all, of course, the pope can make many administrative changes. For example, he can modify canon law, the rules that govern the behavior of all priests and members of the hierarchy. He can “reorganize” different Vatican offices, such as the conservative Roman Curia, and issue encyclicals to set the tone and tenor of the church.

Pope Francis, for example, is well-known for decisions that emphasize simplicity and modesty: after taking office, he kept wearing ordinary black shoes and chose to ride in an everyday car like a Fiat.

But the issue is that the next pope could reverse some or all of these changes. For a change to be permanent, the pope needs to exercise his right to speak infallibly – meaning that what he is saying can never be wrong and, in essence, cannot change.

Speaking infallibly is an incredible burden, in no small part because a pope must do it alone. It has only been done once since papal infallibility was officially declared by the First Vatican Council in 1870. That was in 1950, when Pius XII declared the doctrine of the Assumption of Mary, that she was bodily assumed into heaven upon her death.

What about a council?

But there is a way for a pope to speak authoritatively and with great legitimacy about doctrine: he can call an ecumenical council. In fact, only a pope can call a council, and he does not have to do so in concert with anyone else.

An ecumenical council, by definition, means a gathering of all of the leaders of the world church.

Despite the fact that technically any pope can call a council anytime, councils are rare events – occurring less than once a century on average. The church’s last council before Vatican II, Vatican I, ended prematurely in 1870 as a result of the Franco-Prussian War and did little of note besides declaring the doctrine of papal infallibility.

Prior to Vatican I, the church had not held a council since the Council of Trent in 1563.

The most recent, and most important, example of change occurring in the Catholic Church is the Second Vatican Council, or Vatican II. In 1998 I started research on Vatican II and gained access to everything from council leaders’ personal correspondence to votes from the Vatican Secret Archive.

I believe this is a moment to revisit Vatican II and examine what it can tell us about how the church can, and did, renew itself.

Vatican II

When Pope John XXIII called the council in 1959, the world was surprised, as was, by all accounts, the Vatican bureaucracy. The council created a “political opportunity” in the church for those who wanted to bring change.

Vatican II was a monumental task. It took four years of preparations and four sessions of debates over three years, from 1962 to 1965.

Cardinals of the Roman Catholic Church during the closing ceremony of a session of the Second Vatican Council in 1963. AP Photo

Almost 3,000 bishops, cardinals, heads of religious orders and theologians from all over the world participated in the council. What is noteworthy is that these participants were not members of the Curia, the administrative offices of the Vatican that oversee the day-to-day workings of the church.

These were people usually focused on administering their local dioceses. But when the opportunity came to change the church, they took it.

The hurdles

The process was difficult and full of setbacks. During council preparations and over the course of the council itself, the Curia tried to prevent changes. Indeed, deeming the council complete was, in and of itself, an ongoing, uncertain and often fraught process.

The initial drafts of statements about church doctrine that the Curia prepared before the council did nothing other than enumerate errors and reiterate current church doctrine. These drafts, however, were rejected in a dramatic confrontation during the first days of the council.

As I demonstrate in my book on Vatican II, such progressive victories were a result of the efforts of a group of bishops who believed in the “doctrine of collegiality.” Approved at Vatican II, this doctrine states that the bishops convening together have the same authority to discuss, debate or change doctrine as the pope. These bishops listened to each other and, most importantly, developed compromise positions that the majority of bishops could support.

Thus, for example, bishops who carried historic animosities toward Protestant missionaries learned how important it was to improve those relationships. My analysis of council votes that I obtained from the Vatican Secret Archive demonstrated that, ultimately, a majority of Latin American bishops voted for reforms that fostered better relationships with Protestants.

As a result of many other such dialogues, real changes came about.

The changes from Vatican II

Among the noteworthy ones were those that changed the way the church worshipped. The altar, for example, was turned around to face the people. Mass was changed to be in the vernacular, no longer in Latin. And women no longer had to cover their hair in church.

And these are but the most practical.

Many of the bigger doctrinal changes were those that most Catholics were oblivious to, or knew about only in passing. The biggest of these was the Declaration of Religious Liberty.

By declaring that the only just form of government was one under which people were free to worship as they pleased, the church relinquished centuries-old preferential treatment for particular governments. Prior to the declaration, the church had benefited from governments that either repressed other religious organizations, or otherwise provided financial or legal support for the Catholic Church.

Pope Paul VI delivering an address to the United Nations General Assembly in New York, in 1965. AP Photo

In doing so the church gained more than it lost. Most of all, it gained legitimacy throughout the globe. Just one indication of this was that during the first papal visit to the United States in 1965, Pope Paul VI was invited to speak at the United Nations.

Time for Vatican III?

When Pope John XXIII announced the council in 1959, there was no real crisis in the church. It was, by many measures, a healthy, if ancient, institution.

But today, the Catholic Church is facing a crisis: In many places of the world, mass attendance is down and a growing number of young Catholics are leaving the church.

In addition to these challenges, fewer and fewer men are willing to enter the priesthood. This trend, which began long before the clergy sex abuse scandal, is raising questions around whether the church needs to reconsider its insistence on a male, celibate priesthood.

And, of course, there are many other concerns that the church might want to engage with – for example, whether the 98 percent of practicing Catholics who use “artificial means” of contraception – meaning anything other than the rhythm method – are sinners.

It seems possible to me that given the depth and breadth of the issues it is facing, the church needs more than reflection. The church, I would argue, needs change. It needs another council.

The Conversation

Melissa Wilde, Ph.D. has received or receives funding from the National Science Foundation (Dissertation Improvement Grant SES-0002409), the Woodrow Wilson Foundation (via a Charlotte W. Newcombe Dissertation Writing Year Fellowship), the American Sociological Association’s Fund for the Advancement of the Discipline, the Society for the Scientific Study of Religion, and research funds and a junior sabbatical from Indiana University, the University of Pennsylvania, and Penn’s Program for Research on Religion and Urban Civil Society.


How should we judge people for their past moral failings?

The #MeToo movement and more recent allegations against Brett Kavanaugh have posed questions about past conduct. AP Photo/Damian Dovarganes, File

The recent allegations of sexual assault against Supreme Court nominee Brett Kavanaugh have further divided the nation. Among the questions the case raises are some important ethical ones.

Not least among them is the question of moral responsibility for actions long since passed. Particularly in light of the #MeToo movement, which has frequently involved the unearthing of decades-old wrongdoing, this question has become a pressing one.

As a philosopher, I believe this ethical conundrum involves two issues: one, the question of moral responsibility for an action at the time it occurred. And two, moral responsibility in the present time, for actions of the past. Most philosophers seem to think that the two cannot be separated. In other words, moral responsibility for an action, once committed, is set in stone.

I argue that there are reasons to think that moral responsibility can actually change over time – but only under certain conditions.

Locke on personal identity

Portrait of John Locke. Skara kommun/Flickr.com, CC BY

There is an implicit agreement among philosophers that moral responsibility can’t change over time because they think it is a matter of one’s “personal identity.” The 17th-century British philosopher John Locke was the first to explicitly raise this question. He asked: What makes an individual at one time the very same person as an individual at another time? Is this because both share the same soul, or the same body, or is it something else?

Not only is this, as philosopher Carsten Korfmacher notes, “literally a question of life and death,” but Locke also thought that personal identity was the key to moral responsibility over time. As he wrote,

“Personal identity is the basis for all the right and justice of reward and punishment.”

Locke believed that individuals deserve blame for a crime committed in the past simply because they are the same person that committed the past crime. From this perspective, Kavanaugh the 53-year-old would be responsible for any of the alleged actions that he committed as a young adult.

Problems with Locke’s view

Locke argued that being the same person over time was not a matter of having the same soul or having the same body. It was instead a matter of having the same consciousness over time, which he analyzed in terms of memory.

Thus, in Locke’s view, individuals are responsible for a past wrong act so long as they can remember committing it.

While there is clearly something appealing about the idea that memory ties us to the past, it is hard to believe that a person could get off the hook just by forgetting a criminal act. Indeed, some research suggests that violent crime actually induces memory loss.

But the problems with Locke’s view run deeper than this. The chief one is that it doesn’t take into consideration other changes in one’s psychological makeup. For example, many of us are inclined to think that the remorseful don’t deserve as much blame for their past wrongs as those who express no regret. But if Locke’s view were true, then remorse wouldn’t be relevant.

The remorseful would still deserve just as much blame for their past crimes because they remain identical with their former selves.

Responsibility and change

Of late, some philosophers are beginning to question the assumption that responsibility for actions in the past is just a question of personal identity. David Shoemaker, for example, argues that responsibility doesn’t require identity.

In a forthcoming paper in the Journal of the American Philosophical Association, my coauthor Benjamin Matheson and I argue that the fact that one has committed a wrong action in the past isn’t enough to guarantee responsibility in the present. Instead, this depends on whether or not the person has changed in morally important ways.

Philosophers generally agree that people deserve blame for an action only if the action was performed with a certain state of mind: say, an intention to knowingly commit a crime.

My coauthor and I argue that deserving blame in the present for an action in the past depends on whether those same states of mind persist in that person. For example, does the person still have the beliefs, intentions and personality traits that led to the past act in the first place?

If so, then the person hasn’t changed in relevant ways and will continue to deserve blame for the past action. But a person who has changed may not be deserving of blame over time. The reformed murderer Red, played by Morgan Freeman, in the 1994 film, “The Shawshank Redemption,” is one of my favorite examples. After decades in the Shawshank Penitentiary, Red the old man hardly resembles the teenager that committed the murder.

If this is right, then figuring out whether a person deserves blame for a past action is more complex than simply determining if that individual did, in fact, commit the past action.

Brett Kavanaugh giving his opening statement before the Senate Judiciary Committee. Saul Loeb/Pool Image via AP

In the case of Brett Kavanaugh, some commentators have, in effect, argued that his recent Senate testimony displayed the persisting character of an “aggressive, entitled teen,” although there are those who disagree.

What I argue is that when confronted with the issue of moral responsibility for actions long since passed, we need to not only consider the nature of the past transgression but also how far and how deeply the individual has changed.

The Conversation

Andrew Khoury does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How the loss of Native American languages affects our understanding of the natural world

Dance is a unique way of passing on cultural stories to a younger generation.Aaron Hawkins/Flickr.com, CC BY-ND

Alaska has a “linguistic emergency,” according to Alaska Gov. Bill Walker. A report warned earlier this year that all of the state’s 20 Native American languages might cease to exist by the end of this century if the state did not act.

American policies, particularly in the six decades between the 1870s and 1930s, suppressed Native American languages and culture. It was only after years of activism by indigenous leaders that the Native American Languages Act was passed in 1990, which allowed for the preservation and protection of indigenous languages. Nonetheless, many Native American languages have been on the verge of extinction for many years.

Languages carry deep cultural knowledge and insights. So, what does the loss of these languages mean in terms of our understanding of the world?

Environmental knowledge

Embedded in indigenous languages, in particular, is knowledge about ecosystems, conservation methods, plant life, animal behavior and many other aspects of the natural world.

The shell necklace of Queen Liliʻuokalani.David Eickhoff/Flickr.com, CC BY

In Hawaiian traditions and belief systems, for example, the tree snails were connected to “the realm of the gods.” Hawaiian royalty revered them, which protected them from overharvesting.

The Bishop Museum in Honolulu holds a shell necklace, or lei, of Queen Lili‘uokalani, the last monarch of the Kingdom of Hawaii. It is made from tree snail shells, which signifies the high rank of female royalty. Wearing a shell was believed to provide “mana,” or spiritual power and a way to understand ancestral knowledge.

Many of these snails are now extinct and those remaining are threatened with extinction. Scientists are working with Hawaiian language experts to learn about the belief systems that once helped protect them and their habitats.

A tool for doctors

Words in indigenous languages can have cultural meanings that can be lost in translation. Understanding the subtle differences can often shift one’s perspective about how indigenous people thought about the natural world.

For example, some years ago, as an indigenous scholar of the environment, I led a team of language experts, elders and scholars from Montana and Alberta, Canada, to create a list of Blackfeet words, called a lexicon, for museum objects. The elders I worked with noted that the English word “herb,” which was used to describe most plant specimens within museums, did not have the same meaning in Blackfeet.

In English, the word “herb” can have numerous meanings, including a seasoning for food. The closest Blackfeet equivalent to the English word “herb” is “aapíínima’tsis.” The elders explained that this word means “a tool that doctors use.”

The hope is that the lexicon and the audio files recorded in the Blackfeet language that our research helped create might help future scholars access the meanings embedded in the language.

Blackfeet word for face paint.

Saving vanishing languages

Many Native American communities in the United States are now working to save these cultural insights and revitalize their languages.

In Wisconsin, an Ojibwe language school called “Waadookodaading,” translated literally as “a place where people help each other,” immerses its students in the environmental knowledge embedded in the language.

The Ojibwe believe that theirs is a language of action. And the best way for children to learn is by doing and observing the natural world. Each spring, for example, the students go into the woods to gather maple sap from trees, which is processed into maple syrup and sugar. These students learn about indigenous knowledge of plants, their habitats and uses.

Students from Waadookodaading School making maple syrup.

Language loss can be considered as extreme as the extinction of a plant or an animal. Once a language is gone, the traditional knowledge it carries also gets erased from society.

Efforts are now underway worldwide to remind people of this reality. The United Nations has designated 2019 as the “International Year of Indigenous Languages” in order to raise awareness of indigenous languages as holders of “complex systems of knowledge” and encourage nation states to work toward their revitalization.

The loss of indigenous languages is not Alaska’s concern alone. It affects all of us.

The Conversation

Rosalyn R. LaPier does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The Catholic Church's grim history of ignoring priestly pedophilia – and silencing would-be whistleblowers


Widespread public shock followed the recent release of the Pennsylvania grand jury report that identified more than 1,000 child victims of clergy sexual abuse. In fact, as I know through my research, the Vatican and its American bishops have known about the problem of priestly pedophilia since at least the 1950s. And the Church has consistently silenced would-be whistleblowers from within its own ranks.

In the memory of many Americans, the only comparable scandal was in Massachusetts, where, in 2002, the Boston Globe published more than 600 articles about abuses under the administration of Cardinal Bernard Law. That investigation was immortalized in the 2015 award-winning film, “Spotlight.”

What many Americans don’t remember, however, are other similar scandals, some even more dramatic and national in scope.

Doubling down on secrecy

The Vatican has known about priestly pedophilia for many decades.AP Photo/Andrew Medichini

While the problem of priestly pedophilia might be centuries old, the modern paper trail began only after World War II, when “treatment centers” appeared for rehabilitating abusive priests. Rather than increasing transparency, bishops at the same time developed methods for denying and hiding allegations of child sexual abuse.

During the 1950s and 1960s, bishops from around the U.S. began referring abusive priests to church-run medical centers, so that they could receive evaluation and care without disclosing their crimes to independent clinicians.

Fr. Gerald Fitzgerald, who began his ministry in Boston and Quebec, was among those who advocated prayer over medicine. In 1947, Fitzgerald moved to New Mexico and founded the Servants of the Paraclete, a new order of Catholic priests devoted to healing deviant clergy. His belief in faith healing reflected a vocal minority of Catholic leaders who still viewed psychology as a threat to Christian faith.

Fitzgerald based the Paracletes in New Mexico. From 1947 to 1995, the state became a dumping ground for pedophile priests. As Kathleen Holscher, chair of Roman Catholic studies at the University of New Mexico, has observed, this practice forced New Mexico’s parishes to absorb, in effect, abusive priests from across the country.

Other priests sent to the Paracletes were returned to ministry in their home dioceses, reassigned to new parishes that had no way of knowing about their abusive past.

This system was sustained, in part, by the fact that few diocesan personnel files recorded past accusations by children and parents. As Richard Sipe, a psychologist who worked at a similar Catholic treatment center later revealed, bishops generally masked past accusations by instead recording code words like “tickling,” “wrestling” or “entangled friendship” in personnel files.

By 1956, Fitzgerald became convinced that pedophilia could not be treated, even as he continued to believe that prayers could cure other illnesses, such as alcoholism. He petitioned U.S. bishops to stop sending him their child abusers, advocating instead for firing abusive priests and permanently removing them from ministry.

Fitzgerald eventually appealed directly to the Vatican, and met with Pope Paul VI to discuss the problem in 1963.

Hush money

It is unclear when the Church began using hush settlements to silence victims. The practice, however, was so widespread by the 1980s that the Vatican assigned church lawyers to adjust their insurance policies in order to minimize additional liabilities.

Among those who tried to warn church leaders were Fr. Thomas Doyle, a nonparish priest who specialized in Roman Catholicism’s internal laws; Fr. Michael Petersen, a trained psychiatrist who believed that priests with abusive disorders should be treated medically; and Roy Mouton, a civil attorney who represented one of the church’s most notorious pedophile priests.

Together, they authored a 92-page report and submitted it for presentation at the 1985 meeting of the National Conference of Catholic Bishops, the Church’s apparatus for controlling and governing American priests.

The document estimated that American bishops should plan to be sued for at least US$1 billion, and up to $10 billion, over the following decades.

Several of the nation’s most powerful cardinals buried the report.

In response, Doyle mailed all 92 pages, along with an executive summary, to every diocese in the United States. Yet there is no evidence that any bishops heeded the report’s warnings.

1992: The nation’s first scandal

During the 1980s, victims began to speak out against the church’s systemic attempts to mask the scope of the crisis. In 1984, survivors of Fr. Gilbert Gauthe refused to be silenced by hush money, instead choosing the painful path of initiating public lawsuits in Louisiana. Gauthe ultimately confessed to abusing 37 children.

Representatives of SNAP, the Survivors’ Network for Those Abused by Priests, talk to the media during a press conference in Rome in 2010.AP Photo/Pier Paolo Cito

As these stories became public, more and more victims began to bring lawsuits against the Church. In Chicago, the nation’s first two clergy abuse survivor organizations, Victims of Clergy Sexual Abuse Linkup (LINKUP) and the Survivors’ Network for Those Abused by Priests (SNAP), were created in 1987.

In 1992, survivor Frank Fitzpatrick’s public allegations led to revelations that Fr. James Porter had abused more than 100 other children in Massachusetts. Widespread shock followed, both at the time and after Fitzpatrick’s appearance on ABC’s “Primetime Live,” when news anchor Diane Sawyer interviewed Fitzpatrick and 30 other Porter victims.

The national outcry forced dioceses across the country to create public standards for how they were handling abuse accusations, and American bishops launched new marketing campaigns to regain trust.

In spite of internal pledges to reform their culture of covering up abuses, the Pennsylvania grand jury report demonstrates that the Church’s de facto policy remains unchanged since the 1950s: Instead of reporting rape and sexual abuse to secular authorities, bishops instead continue to transfer predatory priests from one unsuspecting parish to the next.

Victims with no hope of justice

The issue of clergy sex abuse has also unleashed broader questions about justice and faith: Can courtrooms repair souls? How do survivors continue to pray and attend Mass?

As a scholar who studies communities of clergy abuse victims, I have asked Catholics to share their thoughts about the current crisis. Many of them tell me that “at least” Boston’s Cardinal Law “went to jail.” That leads to an awkward moment when I have to refresh their memory.

Cardinal Law was neither indicted nor arrested. Instead, Pope John Paul II transferred Law to run one of the Vatican’s most cherished properties, the Basilica of Saint Mary Major, essentially rewarding Law for his deft cover-up of the abuses in Boston.

Victims of clergy sexual abuse or their family members react after the release of the report by Pennsylvania grand jury.AP Photo/Matt Rourke

In fact, no American bishops or cardinals have ever been jailed for their role in covering up and enabling child sexual abuse. Civil settlements have held the Church accountable only financially. A combination of political complacency and expired statutes of limitations has prevented most survivors from obtaining real justice.

Outraged by this lack of justice, survivors urged the International Criminal Court at The Hague to investigate the Vatican for crimes against humanity. The International Criminal Court declined, citing the fact that many of the alleged crimes occurred before the court was formed, and were thus beyond the scope of the court’s “temporal jurisdiction.”

To date, the highest-ranking priest tried in an American court is Philadelphia’s Monsignor William Lynn, who was charged with conspiracy and two counts of endangering children. His 2012 conviction for one count of endangerment was vacated by the Pennsylvania Supreme Court in 2016. He now awaits an unscheduled retrial.

Even as scholars and theologians have called for all of the American bishops to resign, there has been little talk of criminal prosecutions. If yesterday’s survivors do not find justice, tomorrow’s children will not know safety.

As the Pennsylvania grand jury emphasized:

“There have been other reports about child sexual abuse within the Catholic church… For many of us, those earlier stories happened someplace else, someplace away. Now we know the truth: it happened everywhere.”

The Conversation

Brian Clites does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Why a large church group had little impact when it opposed Kavanaugh's nomination

Justice Anthony Kennedy swears in Supreme Court Justice Brett Kavanaugh.AP Photo/Susan Walsh

Numerous organizations demanded Brett Kavanaugh’s nomination to the U.S. Supreme Court be put on hold or withdrawn, in the wake of the sexual misconduct allegations against him. The most surprising one, perhaps, was from the National Council of Churches, which represents 38 Christian denominations and typically avoids commenting on partisan issues such as court nominations.

In this case, the National Council of Churches said in a statement that Kavanaugh had “disqualified himself” by revealing he had “neither the temperament nor the character” for the Supreme Court.

To some observers, this stance suggested a new vibrancy on the part of the religious left. The National Council of Churches has long been the voice of progressive religion, and its decision to critique Kavanaugh could well have reminded many of the influence the organization once had in American politics.

Indeed, in the past, due to its large membership and close connections to political power, the council and its precursor organization were influential. But that influence is missing today.

Coordinated Christian influence

As a historian who has written on the founding of the council’s precursor organization, the Federal Council of Churches of Christ in America, I am skeptical that the Kavanaugh statement matches the group’s earlier influence.

The Federal Council of Churches of Christ in America was set up in 1908 so that churches could better respond to social woes such as widespread child labor and rampant poverty.

The group’s founders believed that by combining efforts, the nation’s major Protestant denominations could, in their words, “secure a larger combined influence for the churches.” By working together, churches could more effectively coordinate charitable work. More importantly, they could craft a common message on political issues.

The plan worked. As historian Christopher Evans has argued, politicians came to view the Federal Council as “the ‘voice’ of American Protestantism.”

The young organization’s influence became especially clear when the U.S. entered World War I in 1917. Representatives of the Federal Council met frequently with Newton Baker, the secretary of war. Baker gave the organization significant oversight in shaping the U.S. military’s policies on religion.

The new chaplaincy corps, as well as the religious policies of the Army and Navy, came to be largely guided by the Federal Council. And Federal Council leaders used their clout to imbue this new institution with their moral values.

Linking religion and progressive politics

After World War I, the Federal Council grew as a force of progressive activism. The organization’s political outlook and its influence were exemplified by Francis McConnell.

President Franklin D. Roosevelt speaks before the Federal Council of Churches of Christ in 1933.AP Photo

The son of an evangelical pastor in rural Ohio, McConnell himself became a Methodist clergyman. He rejected his father’s conservatism and embraced a liberal religious and political outlook. He was an outspoken advocate for workers’ rights and the primary author of an influential study of the 1919 steel strike, which called for greater government support for laborers and their unions.

As Susan Curtis, a scholar of McConnell, has written, he viewed politics as “an avenue by which religious people could reach their goal of a just society.” This was precisely the attitude of the Federal Council’s founders.

McConnell became Federal Council president in 1928. He brought important political connections to the role, including to future President Franklin D. Roosevelt, who was then governor of New York. McConnell went on to lead the state’s Social Security Administration. His commitment to social welfare programs reflected the council’s longstanding advocacy for the poor.

The Federal Council merged with several smaller groups to become the National Council of Churches in 1950, but its commitment to exerting a progressive religious influence in U.S. politics persisted. The organization was a major supporter of civil rights campaigns of the 1950s and ‘60s.

Its leaders developed educational programs to build support for the movement among whites, and it urged members to participate in protests. The group also raised funds for Martin Luther King Jr. and the Southern Christian Leadership Conference.

The decline of liberal Protestantism

In the late 1960s, the mainline Protestant churches that were the backbone of the National Council entered a decades-long decline in membership. The organization’s influence diminished as a result. Indeed, the National Council’s liberal activism was one of the reasons it lost support. In the wake of its campaign against the Vietnam War, many churchgoers perceived it as being too far to the left.

At the same time, conservative Christians established competing organizations. Groups like the National Association of Evangelicals and the Southern Baptist Convention represented the views of Americans who abandoned mainline churches. By the end of the 20th century, evangelical groups had supplanted the National Council in exerting a religious influence in politics.

The National Council’s impact today

The reality of the National Council’s position remains unchanged in 2018. This explains why the organization’s statement on the Kavanaugh confirmation had little effect.

While the National Council encompasses many denominations, its constituent bodies represent a declining share of the religious population. Neither the Roman Catholic Church nor most large evangelical denominations belong to it. More importantly, political leaders do not view it as the voice of religious people as they did in the early 20th century.

Until that underlying reality changes, in my view, public statements will do little to advance the National Council or the religious left for which it speaks.

The Conversation

David Mislin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Americans spend $70 billion on pets, and that money could do more good

Pet spending in the U.S. is estimated to have exceeded US$72 billion.star5112, CC BY-SA

Sylar, the border collie, has his own mansion along with a trampoline and indoor pool. The dog’s adorable features and notable intelligence have earned him his owner’s devotion and many social media fans.

Sylar’s mansion, where other pets can visit and indulge in expensive spa-like treatments such as massages, drew the media’s attention to the increased spending in China on pet-related services. The Chinese are forecast to spend about US$2.6 billion on their pets by 2019 – a 50 percent increase from 2016.

This, however, pales in comparison with what Americans spend on their pets annually. This year alone, pet spending in the U.S. is estimated to exceed $72 billion, which is more than the combined GDP of the 39 poorest countries in the world.

Of course, these expenditures are not distributed equally among all pets. Sylar, like other celebrities, lives in the lap of luxury, while many of his fellow creatures experience hunger, homelessness, abuse and other deprivations.

How are we to think about the ethics of spending so much money on pets when it could be used to alleviate the suffering in the world?

The utilitarian tradition

Ethicists have long grappled with questions of right and wrong. One of the most notable ethical traditions – utilitarianism – has had a lot to say about how other sentient beings, besides humans, should be treated, along with how resources ought to be distributed among them.

According to utilitarianism, an action is right if it produces the best overall consequences, out of all possible actions, for all those who are affected. In other words, in its simplest form, the goal is to maximize happiness and to minimize suffering.

Some philosophers, such as Thomas Scanlon, argue that ethics is about what humans “owe to each other.” But utilitarianism broadens the scope of the moral community to include the interests of all sentient beings, including those of nonhuman animals. As Jeremy Bentham, one of the earliest proponents of utilitarianism, wrote in 1789,

“The question is not, ‘Can they reason?’ nor, ‘Can they talk?’ but ‘Can they suffer?’”

The classical utilitarians not only advocated for the interests of nonhuman animals, but also for the interests of all humans, including prisoners and women. Both Bentham and 19th-century philosopher John Stuart Mill made such arguments centuries before it was fashionable to do so.

This was captured in Bentham’s motto, “Everybody to count for one, nobody for more than one.”

How to create the most good

Bentham’s philosophy later gave rise to Peter Singer’s principle of equal consideration of interests, which states that when determining right and wrong, all those whose interests are affected should be included in the ethical decision-making process, and those interests ought to be weighed equally.

In fact, Singer’s equal consideration of interests can be used not only to make a case against racism and sexism, but also against “speciesism” – the idea that the interests of humans count for more than the interests of other species.

It is tempting to think that people arguing for the principle of equal consideration of the interests of animals would be in favor of pet mansions and costumes. But would they?

The answer to this question can be found in Singer’s view called “effective altruism,” which is based on the premise that many affluent people spend a lot of money on nonnecessities such as pet costumes or the latest technological gadget. According to one estimate, about $440 million of pet spending in the U.S. was on Halloween pet costumes alone. If that money was instead donated to a good cause, then more good or utility could be produced.

Effective altruism maintains that people should reflect on how allocation of their resources such as their money and time impact other sentient beings.World Bank Photo Collection, CC BY-NC-ND

When thinking about creating the most good possible, effective altruism maintains that people should reflect on how the allocation of their resources such as their money and time impacts other sentient beings.

Some suggestions endorsed by this approach are to donate to efficient charities that aim to improve global health initiatives such as stopping the spread of diseases like malaria. In fact, some specialists have developed methodologies and lists of recommended charities to help people figure out what cause to support.

Do pets need mansions or Halloween costumes?

Those persuaded by the moral argument behind the effective altruism movement may want to allocate their resources differently.

If a fraction of worldwide pet spending, say 25 percent, was allocated elsewhere – for instance, to mitigating the suffering of millions of farm animals or to preventing malaria by providing mosquito nets – more good could certainly be done.

According to one estimate, Americans spend millions of dollars on Halloween costumes for their pets each year.Andrew Miller, CC BY-NC-ND

As an ethicist, pet owner and vegetarian, I don’t deny that the interests of animals matter, and while Sylar is indeed one privileged pup, his lifestyle comes with costs to others and to the planet itself.

Editor’s note: This piece is part of our series on ethical questions arising from everyday life. We would welcome your suggestions. Please email us at ethical.questions@theconversation.com.

The Conversation

Sandra Woien does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The mosques that survived Palu's tsunami and what that means

The Floating Mosque of Palu that survived after the earthquake. AP Photo/Tatan Syuflana

In the devastation that followed the earthquake and resulting tsunami in the Indonesian city of Palu in Central Sulawesi, many Muslim religious sites were destroyed.

Two mosques, however, survived, with little to no damage to their structure.

In a province where 85 percent of the 3 million residents are Muslims, the survival of these particular mosques and not others has started a discussion about the very nature of Islam.

Mosques of Palu

I came to know Palu well while doing fieldwork in Central Sulawesi in 1984 as part of my research on “traditional rituals.” Palu is the administrative and cultural hub for the whole Sulawesi province.

Of the 24 mosques, 20 were severely damaged in the tsunami. The worst hit was the Baiturrahman Mosque, where 300 people were killed during evening prayers.

However, the Alkhairaat Mosque and the Arkham Babu Rahman Mosque, known locally as the Floating Mosque, survived. The Floating Mosque dominated Palu Beach with its dramatic walkway from the shore to the mosque. Since the tsunami, the mosque’s access from the shore has been cut off, and it is now literally floating in Palu Bay.

Though partially submerged, its structure remains intact. Palu residents, commenting on Facebook in the first few days after the tsunami, noted how “it remained miraculously untouched.”

At a time when people are trying to make sense of the death and destruction, the survival of Alkhairaat and Arkham Babu Rahman is seen to be a sign of saintly power and the mercy of Allah. Thousands have turned up to pray at Alkhairaat Mosque and walk reverently past the mosque floating in water.

The mosques that survived

The Floating Mosque is dedicated to the 17th-century founder of Islam in Palu, Datuk Karama. Karama came from the western island of Sumatra and preached Islam to the people of Palu.

The Alkhairaat Mosque was erected by a Yemeni merchant, Sayyid Idrus Al-Jufri, in 1930. Al-Jufri also founded religious schools after discovering upon his arrival that many people did not have basic education. The first school eventually became the Alkhairaat University.

The tombs of Al-Jufri and Datuk Karama are located near their mosques, where people come to seek spiritual guidance. The street where Alkhairaat Mosque is located as well as the airport in Palu have been named after Al-Jufri.

What it means to Palu survivors

In private comments on Facebook’s instant messenger, people have asserted that the Alkhairaat Mosque and the Floating Mosque survived because of the mystical power of the saints who “guard” these mosques.

These comments have revealed tensions between what people refer to as “old Islam” and “reformist Islam.” In Palu, reformist Islam includes beliefs of Salafis and Wahhabis, who want to go back to a purer form of Islam. They see the belief in saints as a “recent” addition to the original Islam that was revealed to Prophet Muhammad in the 7th century A.D.

In fact, during the early 2000s, some of the more radical Wahhabi and Salafi sects used extreme, violent methods to convince Central Sulawesians to abandon their beliefs in the mystical power of saints, or “old Islam.”

The educational institutions led by the Alkhairaat Foundation have played a considerable role in fostering the old Islamic beliefs. The foundation runs 43 boarding schools and 1,700 religious schools across Eastern Indonesia, as well as a large university in Palu. All emphasize tolerance. However, Salafi and Wahhabi schools, promoted by Saudi funding in the 1990s, argued that the tolerance taught by Alkhairaat was the “wrong kind of Islam.”

In 2000, Alkhairaat students at a school in Poso, a port town near the southern coast of Central Sulawesi, were targeted by terrorists. The region’s Christian minority, about 14 percent of the population, has also been under attack.

There has been no violence since 2010, but as recently as 2016, the Indonesian government was still searching for terrorist cells in the mountain jungles of Central Sulawesi.

Palu’s future

Despite the reformists’ activity, Alkhairaat’s influence in Palu remains strong. As a major philanthropic organization in Palu and beyond, with many graduates of Alkhairaat University serving in government and private sectors, Alkhairaat has helped counter hate rhetoric and actions.

Some of the comments on Facebook reveal survivors’ loyalty to Alkhairaat values. Post-tsunami, however, Alkhairaat’s resources are likely strained, as graduates have told me in private conversations on Facebook.

The question is whether this tragedy will bring outside funds that once again disturb the internal harmony among Muslims. If so, will Palu sustain its spirit of tolerance?

The Conversation

Jennifer Nourse receives funding from Fulbright and the University of Richmond Faculty Research Committee.

Why the Christian idea of hell no longer persuades people to care for the poor

What was behind early depictions of hell?Erica Zabowski/Flickr.com, CC BY-ND

It’s that time of the year when hell becomes a common theme for entertainment, and hell-themed haunted houses and horror movies pop up all over the country.

Although many of us now associate hell with Christianity, the idea of an afterlife existed much earlier. Greeks and Romans, for example, used the concept of Hades, an underworld where the dead lived, both as a way of understanding death and as a moral tool.

In the present day, however, the use of this rhetoric has changed radically.

Rhetoric in ancient Greece and Rome

The earliest Greek and Roman depictions of Hades in the epics did not focus on punishment, but described a dark shadowy place of dead people.

In Book 11 of the Greek epic the “Odyssey,” Odysseus travels to the realm of the dead, encountering countless familiar faces, including his own mother.

Near the end of Odysseus’ tour, he encounters a few souls being punished for their misdeeds, including Tantalus, who was sentenced eternally to have food and drink just out of reach. It is this punishment from which the word “tantalize” originated.

Hundreds of years later, the Roman poet Virgil, in his epic poem “Aeneid,” describes a similar journey of a Trojan, Aeneas, to an underworld, where many individuals receive rewards and punishments.

These epics formed an ancient curriculum used for hundreds of years to teach students across the Roman Empire everything from politics to economics to virtue.

In later literature, these early traditions around punishment persuaded readers to behave ethically in life so that they could avoid punishment after death. For example, Plato describes the journey of a man named Er, who watches as souls ascend to a place of reward and descend to a place of punishment. Lucian, a second-century A.D. satirist, takes this one step further in depicting Hades as a place where the rich were turned into donkeys and had to bear the burdens of the poor on their backs for 250 years.

For Lucian this comedic depiction of the rich in hell was a way to critique excess and economic inequality in his own world.

Early Christians

By the time the New Testament gospels were written in the first century A.D., Jews and early Christians were moving away from the idea that all of the dead go to the same place.

Early Christians portrayed hell through different terms.paukrus/Flickr.com

In the Gospel of Matthew, the story of Jesus is told with frequent mentions of “the outer darkness where there is weeping and gnashing of teeth.” As I describe in my book, many of the images of judgment and punishment that Matthew uses represent the early development of a Christian notion of hell.

The Gospel of Luke does not discuss final judgment as frequently, but it does contain a memorable representation of hell. The Gospel describes Lazarus, a poor man who had lived his life hungry and covered with sores, at the gate of a rich man, who disregards his pleas. After death, however, the poor man is taken to heaven. Meanwhile, it is the turn of the rich man to be in agony as he suffers in the flames of hell and cries out for Lazarus to give him some water.

For the marginalized other

Matthew and Luke are not simply offering audiences a fright fest. Like Plato and later Lucian, these New Testament authors recognized that images of damnation would capture the attention of their audience and persuade them to behave according to the ethical norms of each gospel.

Later Christian reflections on hell picked up and expanded this emphasis. Examples can be seen in the later apocalypses of Peter and Paul – stories that use strange imagery to depict future times and otherworldly spaces. These apocalypses included punishments for those who did not prepare meals for others, care for the poor or care for the widows in their midst.

Although these stories about hell were not ultimately included in the Bible, they were extremely popular in the ancient church, and were used regularly in worship.

A major idea in Matthew was that love for one’s neighbor was central to following Jesus. Later depictions of hell built upon this emphasis, inspiring people to care for the “least of these” in their community.

Damnation then and now

The idea of hell is used to bring about conversions.William Morris

In the contemporary world, the notion of hell is used to scare people into becoming Christians, with an emphasis on personal sins rather than a failure to care for the poor or hungry.

In the United States, as religion scholar Katherine Gin Lum has argued, the threat of hell was a powerful tool in the age of nation-building. In the early Republic, as she explains, “fear of the sovereign could be replaced by fear of God.”

As the ideology of republicanism developed, with its emphasis on individual rights and political choice, the way that the rhetoric of hell worked also shifted. Instead of motivating people to choose behaviors that promoted social cohesion, hell was used by evangelical preachers to get individuals to repent for their sins.

Even though people still read Matthew and Luke, it is this individualistic emphasis, I argue, that continues to inform our modern understanding of hell. It is evident in the hell-themed Halloween attractions with their focus on gore and personal shortcomings.

These depictions are unlikely to portray the consequences for people who have neglected to feed the hungry, give water to the thirsty, welcome the stranger, clothe the naked, care for the sick or visit those in prison.

The fears around hell, in the current times, play only on the ancient rhetoric of eternal punishment.

The Conversation

Meghan Henning received funding from the Jacob K. Javitts fellowship (U.S. department of education).


Why believing in ghosts can make you a better person

A Halloween ghost.Werner Reischel/Flickr.com, CC BY

Halloween is a time when ghosts and spooky decorations are on public display, reminding us of the realm of the dead. But could they also be instructing us in important lessons on how to lead moral lives?

Roots of Halloween

The origins of modern-day Halloween go back to “samhain,” a Celtic celebration for the beginning of the dark half of the year when, it was widely believed, the realm between the living and the dead overlapped and ghosts could be commonly encountered.

In 601 A.D., to help his drive to Christianize northern Europe, Pope Gregory I directed missionaries not to stop pagan celebrations, but rather to Christianize them.

Accordingly, over time, the celebrations of samhain became All Souls’ Day and All Saints’ Day, when speaking with the dead was considered religiously appropriate. All Saints’ Day was also known as All Hallows’ Day, and the night before became All Hallows’ Evening, or “Hallowe’en.”

Christian ghosts

Not only did the pagan beliefs around spirits of the dead continue, but they also became part of many early church practices.

Pope Gregory I himself suggested that people seeing ghosts should say masses for them. The dead, in this view, might require help from the living to make their journey towards Heaven.

During the Middle Ages, beliefs around souls trapped in purgatory led to the church’s increasing practice of selling indulgences – payments to the church to reduce penalties for sins. The widespread belief in ghosts turned the sale of indulgences into a lucrative practice for the church.

It was such beliefs that contributed to the Reformation, the division of Christianity into Protestantism and Catholicism led by German theologian Martin Luther. Indeed, Luther’s “95 Theses,” which he nailed to the All Saints’ Church in Wittenberg on Oct. 31, 1517, was largely a protest against the selling of indulgences.

Subsequently, ghosts became identified with “Catholic superstitions” in Protestant countries.

Debates, however, continued about the existence of ghosts and people increasingly turned to science to deal with the issue. By the 19th century, Spiritualism, a new movement which claimed that the dead could converse with the living, was fast becoming mainstream, and featured popular techniques such as seances, the ouija board, spirit photography and the like.

Although Spiritualism faded in cultural importance after World War I, many of its approaches can be seen in the “ghost hunters” of today, who often seek to prove the existence of ghosts using scientific techniques.

A wide, wide world of ghosts

These beliefs are not just part of the Christian world. Most, although not all, societies have a concept of “ghosts.” In Taiwan, for example, about 90 percent of people report seeing ghosts.

An elaborate model house is being guided into the ocean as an offering to wandering ghosts during the beginning of the Ghost Month Festival in Taiwan.AP Photo/Chiang Ying-ying

Along with many Asian countries such as Japan, Korea, China and Vietnam, Taiwan celebrates a “Ghost Month,” which includes a central “Ghost Day,” when ghosts are believed to freely roam the world of the living. These festivals and beliefs are often tied to the Buddhist story of the Urabon Sutra, where Buddha instructs a young priest on how to help his mother whom he sees suffering as a “hungry ghost.”

As in many traditions, Taiwanese ghosts are seen either as “friendly” or “unfriendly.” The “friendly” ghosts are commonly ancestral or familial and welcomed into the home during the ghost festival. The “unfriendly” ghosts are those angry or “hungry” ghosts that haunt the living.

Role of ghosts in our lives

As a scholar who has studied and taught ghost stories for many years, I have found that ghosts generally haunt for good reasons. These range from unsolved murders and the lack of proper funerals to forced suicides, preventable tragedies and other ethical failures.

Ghosts, in this light, are often found seeking justice from beyond the grave. They could make such demands from individuals, or from societies as a whole. For example, in the U.S., sightings have been reported of the ghosts of African-American slaves and murdered Native Americans. Scholar Elizabeth Tucker details many of these reported sightings on university campuses, often tied in with sordid aspects of the campus’s past.

A ghost dance on Halloween.Chris Jepsen/Flickr.com, CC BY-NC-ND

In this way, ghosts reveal the shadow side of ethics. Their sightings are often a reminder that ethics and morality transcend our lives and that ethical lapses can carry a heavy spiritual burden.

Yet ghost stories are also hopeful. In suggesting a life after death, they offer a chance to be in contact with those that have passed and therefore a chance for redemption – a way to atone for past wrongs.

This Halloween, along with the shrieks and shtick, you may want to take a few minutes to appreciate the role of ghosts in our haunted pasts and how they guide us to lead moral and ethical lives.

The Conversation

Tok Thompson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

What history reveals about surges in anti-Semitism and anti-immigrant sentiments

People place flowers at the Tree of Life synagogue in Pittsburgh.AP Photo/Matt Rourke

Leer en español.

The shooting at the Tree of Life Congregation in Pittsburgh is believed to be the deadliest attack on Jews in American history. Eleven people were killed when the gunman burst in on the congregation’s morning worship service carrying an assault rifle and three handguns.

The suspect, Robert Bowers, is reported to be a frequent user of Gab, a social networking site that has become increasingly popular among white nationalists and other alt-right groups. He is alleged to have regularly reposted anti-Semitic slurs, expressed virulent anti-immigrant sentiments, called immigrants “invaders,” and claimed that Jews are “the enemy of white people.”

The magnitude of the Pittsburgh synagogue massacre may be unprecedented, but it is only the latest in a series of hate crimes against Jews. In February 2017, more than 100 gravestones were vandalized at a cemetery outside of St. Louis, Missouri, and at another Jewish cemetery in Philadelphia. Indeed, hate crimes against minority religions, people of color and immigrants have been on the rise. In the 10 days following the 2016 presidential election, nearly 900 hate-motivated incidents were reported, many on college campuses. Many of these incidents targeted Muslims, people of color and immigrants, along with Jews.

This outpouring of anti-immigrant and anti-Semitic sentiment is reminiscent in many ways of the political climate in the U.S. during the years between the first and second world wars, known as the interwar period.

America as the ‘melting pot’

In its early years, the United States maintained an “open door policy” that drew millions of immigrants of all religions, including Jews, to the country. Between 1820 and 1880, over 9 million immigrants entered America.

As a Jewish studies scholar, I am all too aware that by the early 1880s, American nativists – people who believed that the “genetic stock” of Northern Europe was superior to that of Southern and Eastern Europe – began pushing for the exclusion of “foreigners,” whom they “viewed with deep suspicion.”

Fifty German-Jewish refugee children, ranging in age from 5 to 13, salute the American flag, June 5, 1939.AP Photo

In fact, as scholar Barbara Bailin writes, most of the immigrants, who were from Southern, Central and Eastern Europe, “were considered so different in composition, religion, and culture from earlier immigrants as to trigger a xenophobic reaction that served to generate more restrictive immigration laws.”

In August 1882, Congress responded to increasing concerns about America’s “open door” policy and passed the Immigration Act of 1882, which included a provision denying entry to “any convict, lunatic, idiot or any person unable to take care of himself without becoming a public charge.”

However, enforcement was not strict, in part because immigration officers working at the points of entry were expected to implement these restrictions as they saw fit.

In fact, it was during the late 19th century that the American “melting pot” was born: Nearly 22 million immigrants from all over the world entered the U.S. between 1881 and 1914.

They included approximately 1.5 million European Jews hoping to escape the longstanding, legally enforced anti-Semitism of many parts of the European continent, which limited where Jews could live, what kinds of universities they could attend and what kinds of professions they could hold.

Fear of Jews and immigrants

Nativists continued to rail against the demographic shifts and in particular took issue with the high numbers of Jews and Southern Italians entering the country.

These fears were eventually reflected in the makeup of Congress, as the electorate voted increasing numbers of nativist congresspeople into office who vowed to change immigration laws with their constituents’ anti-immigrant sentiments in mind.

Immigrants on Ellis Island.Library of Congress Prints and Photographs Division

Nativist and isolationist sentiment in America only increased as Europe fell headlong into World War I, “the war to end all wars.” On Feb. 4, 1917, Congress passed the Immigration Act of 1917, which reversed America’s open door policy and denied entry to the majority of those seeking to immigrate. As a result, between 1918 and 1921, only 20,019 Jews were admitted into the U.S.

The 1924 Immigration Act tightened the borders further. It transferred the decision to admit or deny immigrants from the immigration officers at the port of entry to the Foreign Services Office, which issued visas after the completion of a lengthy application with supporting documentation.

The quotas established by the act also set strict limits on the number of new immigrants allowed after 1924. The number of Central and Eastern Europeans allowed to enter the U.S. was dramatically reduced.

The 1924 quotas limited visas for each nationality to a mere 2 percent of the number of people of that nationality already living in the U.S. in 1890. They excluded immigrants from Asia completely, except for Filipinos, who could enter as U.S. nationals. The stated fundamental purpose of this immigration act was to preserve the ideal of U.S. “homogeneity.”

Congress did not revise the act until 1952.

Why does this history matter?

The political climate of the interwar period has many similarities with the anti-immigrant and anti-Semitic environment today.

President Trump’s platform consists in large part of strongly anti-immigrant rhetoric. A Pew Charitable Trust survey shows that as many as 66 percent of registered voters who supported Trump consider immigration a “very big problem,” while only 17 percent of Hillary Clinton’s supporters said the same.

Moreover, 59 percent of Trump supporters actively associate “unauthorized immigrants with serious criminal behavior.”

Supporters of President Trump during a campaign rally.Gage Skidmore, CC BY-SA

President Trump’s claims about the dangers posed by immigrants are not supported by facts, but they do indicate increased isolationism, nativism and right-wing nationalism within the U.S. Once again, anti-immigrant sentiment and anti-Semitism are going hand in hand.

This is an updated version of an article originally published on April 2, 2017.

The Conversation

Ingrid Anderson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The Dead Sea Scrolls are a priceless link to the Bible's past

A conservator works with a portion of the Dead Sea Scrolls containing Psalm 145 at The Franklin Institute, in Philadelphia.AP Photo/Matt Rourke

The Museum of the Bible in Washington, D.C., has removed five Dead Sea Scrolls from exhibits after tests confirmed these fragments were not from ancient biblical scrolls but forgeries.

Over the last decade, the Green family, owners of the craft-supply chain Hobby Lobby, has paid millions of dollars for fragments of the Dead Sea Scrolls to be the crown jewels in the museum’s exhibition showcasing the history and heritage of the Bible.

Why would the Green family spend so much on small scraps of parchment?

Dead Sea Scrolls’ discovery

From the first accidental discovery, the story of the Dead Sea Scrolls is a dramatic one.

In 1947, Bedouin men herding goats in the hills to the west of the Dead Sea entered a cave near Wadi Qumran in the West Bank and stumbled on clay jars filled with leather scrolls. Ten more caves were discovered over the next decade that contained tens of thousands of fragments belonging to over 900 scrolls. Most of the finds were made by the Bedouin.

Some of these scrolls were later acquired by the Jordanian Department of Antiquities through complicated transactions and a few by the state of Israel. The bulk of the scrolls came under the control of the Israel Antiquities Authority in 1967.

Included among the scrolls are the oldest copies of books in the Hebrew Bible and many other ancient Jewish writings: prayers, commentaries, religious laws, magical and mystical texts. They have shed much new light on the origins of the Bible, Judaism and even Christianity.

The Bible and the Dead Sea Scrolls

Before the discovery of the Dead Sea Scrolls, the oldest known manuscripts of the Hebrew Bible dated to the 10th century A.D. The Dead Sea Scrolls include over 225 copies of biblical books that date up to 1,200 years earlier.

These range from small fragments to a complete scroll of the prophet Isaiah, and every book of the Hebrew Bible except Esther and Nehemiah. They show that the books of the Jewish Bible were known and treated as sacred writings before the time of Jesus, with essentially the same content.

On the other hand, there was no “Bible” as such but a loose assortment of writings sacred to various Jews, including numerous books not in the modern Jewish Bible.

Two men stand on the foundations of the ancient Khirbet Qumran ruins, which lie on the northwestern shore of the Dead Sea in Jordan, in 1957. The ruins are above the caves in which the Dead Sea Scrolls were discovered in 1947.AP Photo

Moreover, the Dead Sea Scrolls show that in the first century B.C. there were different versions of books that became part of the Hebrew canon, especially Exodus, Samuel, Jeremiah, Psalms and Daniel.

This evidence has helped scholars understand how the Bible came to be, but it neither proves nor disproves its religious message.

Judaism and Christianity

The Dead Sea Scrolls are unique in representing a sort of library of a particular Jewish group that lived at Qumran from the first century B.C. to about 68 A.D. They probably belonged to the Essenes, a strict Jewish movement described by several writers from the first century A.D.

The scrolls provide a rich trove of Jewish religious texts previously unknown. Some of these were written by Essenes and give insights into their views, as well as their conflict with other Jews including the Pharisees.

The Dead Sea Scrolls contain nothing about Jesus or the early Christians, but indirectly they help to understand the Jewish world in which Jesus lived and why his message drew followers and opponents. Both the Essenes and the early Christians believed they were living at the time foretold by prophets when God would establish a kingdom of peace and that their teacher revealed the true meaning of Scripture.

Fame and forgeries

The fame of the Dead Sea Scrolls is what has encouraged both forgeries and the shadow market in antiquities. They are often called the greatest archaeological discovery of the 20th century because of their importance to understanding the Bible and the Jewish world at the time of Jesus.

Religious artifacts especially attract forgeries, because people want a physical connection to their faith. The so-called James Ossuary, a limestone box claimed to be the burial box of the brother of Jesus, attracted much attention in 2002. A few years later, it was found to be an authentic burial box for a person named James from the first century A.D., but by adding “brother of Jesus” the forger made it seem priceless.

Scholars eager to publish and discuss new texts are partly responsible for this shady market.

The recent confirmation of forged scrolls at the Museum of the Bible only underscores that artifacts should be viewed with the highest suspicion unless their source is fully known.

The Conversation

Daniel Falk does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Religion and refugees are deeply entwined in the US

Rally organized by HIAS, a Jewish group that supports refugees, outside the White House.Ted Eytan/Flickr, CC BY-NC-SA

Leer en español.

Robert Bowers lashed out at what he believed to be a Jewish plot to bring more refugees and asylum seekers to the U.S. before allegedly murdering 11 people at the Tree of Life synagogue in Pittsburgh.

Bowers’s claim that HIAS, a prominent Jewish humanitarian organization, was bringing migrants from Honduras, El Salvador and Guatemala northward to commit violence was false. But it is true that many religious communities in the U.S., including American Jews, have long supported refugees and asylum-seeking migrants who arrive in the U.S.

In my research on the nonprofits that resettle and assist these newcomers, I’ve found that while religious communities continue to do this work through faith-based nonprofits and individual congregations, there are signs that some white Christians no longer support this mission.

Religious advocacy on behalf of refugees

The idea of welcoming the stranger is central to Christianity, Judaism and Islam. It originally arose from cultures born in deserts where leaving someone outside the city gates could be a death sentence. Religious leaders of those faiths often connect that ethic to a responsibility to shield refugees and other immigrants from violence and oppression.

Starting in the late 19th century, and during the Holocaust, faith communities appealed to the U.S. government to welcome Jews seeking safety from persecution. They also advocated for allowing Armenians, who were murdered en masse by leaders of the Ottoman Empire, to immigrate to America.

After World War II, an alliance between Protestant, Catholic and Jewish organizations finally swayed policymakers to adopt a more humanitarian-focused U.S. foreign policy. The U.S. then joined with other nations to sign the 1951 Geneva Convention, a U.N. agreement that established the rights of refugees to legal protection.

Among the convention’s main tenets is a global ban on sending refugees back to countries where they will be unsafe. This sometimes requires resettling refugees in a safer country. Faith-based organizations have been partnering with the U.S. government ever since.

It’s in the Bible.Victoria Pickering/Flickr, CC BY-SA

The sanctuary movement

Between 1951 and 1980, the government resettled refugees in the U.S. on an ad hoc basis without spending much on assisting them. During this time, faith-based organizations filled in gaps to ensure refugees got off to a good start in the U.S.

Religious groups also advocated for asylum seekers, people who arrived seeking protection without first getting refugee status. Between 1980 and 1991, almost 1 million Central Americans crossed the U.S. border seeking asylum. From the start, the government denied most of their petitions.

Many Christian and Jewish leaders advocated on behalf of these migrants. They preached sermons, lobbied the government and organized protests calling for the protection of Central American asylum seekers. Hundreds of religious communities provided sanctuary, usually inside houses of worship, and gave the migrants legal support.

In 1985 the Center for Constitutional Rights sued the federal government on behalf of the American Baptist Church, Presbyterian Church USA, the Unitarian Universalist Association, the United Methodist Church and four other religious organizations, claiming discrimination against Salvadoran and Guatemalan asylum seekers. The government later settled the class action lawsuit.

Faith-based nonprofits supporting refugees today

Ever since Congress passed the 1980 Refugee Act, creating the current system of refugee resettlement, U.S. faith-based organizations have played a central role in it.

There are nine national voluntary agencies that work directly with the government, six of which are faith-based: one Jewish, one Catholic, one evangelical Christian and three mainline Protestant. These groups arrange for refugees to find housing, land jobs and enroll in English classes. They do so regardless of the newcomers’ own religions or their countries of origin.

In my research, I have found that staff at faith-based organizations commonly use religious rhetoric to justify their work and to describe their commitment to that work.

At the same time, religiously based refugee organizations frame their efforts using interfaith language. They invoke the ethical imperative to provide asylum and refuge in ways that cross-cut multiple religious traditions as they collect and disburse money and household goods – and mobilize volunteers.

I found that staff at faith-based organizations use religious rhetoric in ways that are explicitly inclusive, being careful not to exclude refugees from other religions.

“The Jewish dimension is helping people realize that America is a place that welcomes all, and helping people that have come from a land where maybe sometimes being a Jew was considered worse than dirt,” a director of a local office of HIAS, formerly known as the Hebrew Immigrant Aid Society, told me. “Do we apply those same kinds of principles to other communities that we help? Absolutely.”

The director of a Catholic Charities office echoed that sentiment. “We have a saying,” he told me. “We help people not because they are Catholic, but because we are.”

Most Americans in this movement support organizations that share their own faith. But these nonprofits also form interfaith networks that support refugees and asylees. Secular groups do this charitable work as well.

Changing religious politics

Despite the deep religious and moral foundations for granting asylum, this connection may be fraying, at least in some communities. As a record number of people, many of them nonwhite or from the Middle East, are displaced globally, I’m seeing signs that the moral framework supporting asylum is giving way in some quarters to support for restrictive policies that avoid any moral or international obligation to asylum seekers.

This became clear when the Trump administration enacted its “zero tolerance” policy, which it said justified arresting anyone crossing the border without documentation – including people with babies and toddlers. Government agents and contractors subsequently separated more than 2,650 children from their parents, sparking outrage.

Some faith leaders, such as prominent evangelical leader Franklin Graham, spoke out against the child separations without directly criticizing aggressive border enforcement. But many others have been more pointed in their comments, specifically calling out immigrant exclusion as antithetical to their religious beliefs. Mainline Protestant, Jewish, Mormon, Catholic and evangelical Christian groups all released statements against tighter immigration enforcement itself.

What is unusual is that some conservative Christian groups have begun to lobby in favor of strengthened immigration enforcement. This is a break from the past. While there have been theological differences between conservative and mainline Protestant Christians on a number of issues, welcoming the stranger had been one point upon which Christians generally agreed.

Race and racial politics are intertwined with this split. There now appears to be an inverse relationship in the U.S. between religiosity and support for asylum among white Americans. In a Pew survey conducted in May 2018, only 43 percent of white Protestants and 25 percent of white evangelical Christians thought that the U.S. had a responsibility to accept refugees.

Conversely, 63 percent of black Protestants and 65 percent of the religiously unaffiliated thought that the nation has that responsibility. Another recent Pew survey, which did not report separate results for Jews or Muslims, showed broad support for a border wall among white evangelicals and mainline Protestants, and opposition among black Protestants, Hispanic Catholics and the religiously unaffiliated.

It’s unclear whether this represents a downward trend in white Christian support for people seeking safe haven. Following the mass murder of Pittsburgh Jews, it will be important to watch changes in attitudes toward refugees and other migrants.

The Conversation

Stephanie J. Nawyn previously received funding from the U.S. Department of State Fulbright Program, and dissertation funding through the University of Southern California provided by the Haynes Foundation and Pew Charitable Trusts Foundation.

Evangelical Christians are racially diverse – and hold diverse views on immigration

Evangelicals of color are among the fastest-growing segments of the American population. AP Photo/Tina Fineberg

The influence of white evangelicals on American politics is well known. More than 80 percent supported Donald Trump in the 2016 election. But two of the fastest-growing segments of the American population – Latino and Asian-American voters – also are part of evangelical America.

What will drive their votes in the upcoming midterms?

Voting patterns

In my book, “Immigration, Evangelicals and Politics in an Era of Demographic Change,” I look at the tenacious hold of white evangelicals on political power, despite a dramatic decline in their numbers over the past decade.

In each of the past three election cycles, at least 1 in 4 voters has been a white evangelical, even though white evangelicals currently make up only 17 percent of the total American population.

Given that white evangelicals are the largest religious group in some of the states hosting the most competitive House races this November, including Kansas, Virginia and North Carolina, it is important to first consider the role of religiously framed issues such as abortion.

Similar religious values

Evangelicals during a mass prayer rally in Boston, Massachusetts. AP Photo/Elise Amendola

Popular assumptions contend that religious values are what set white evangelicals apart from others. A 2015 Public Religion Research Institute study suggests that white evangelicals are more likely than other Americans to express dismay that the U.S. is no longer a “Christian nation.”

However, it is also the case that white evangelicals do not have a monopoly on religious values. In fact, Asian-American evangelicals report higher levels of church-going and fundamentalist beliefs than their white counterparts.

And, on issues such as abortion, my research shows that Latino evangelicals express more conservative attitudes than their white counterparts. In the 2008 Collaborative Multiracial Post-Election Survey, for example, 76 percent of Latino evangelical voters opposed making abortion legal, compared to 72 percent of white evangelicals.

And yet despite what appears to be higher levels of religious commitment, far fewer Asian-American and Latino evangelicals supported Trump compared to white evangelicals.

In my book, I describe how less than 40 percent of registered Asian-American evangelicals and less than 30 percent of registered Latino evangelicals reported voting for Trump in 2016.

Immigration and race issues

What I argue is that attitudes toward immigration, more than religion, matter for white evangelicals’ political attitudes and vote choice.

It is true that over the past decade, for example, a large number of white evangelical leaders have publicly stated their support for a path to citizenship for undocumented immigrants. They have also voiced support for extending a program that protects from deportation young undocumented immigrants who arrived in the U.S. as children.

However, what I argue is that rank-and-file white evangelicals in the U.S. have only become more conservative on these issues.

In fact, while support for a path to citizenship for undocumented immigrants had slowly been increasing among white evangelicals for the five years from 2009 to 2013, it dipped in 2014.

A study by PRRI and the Brookings Institution found that support dropped from 56 percent to 48 percent. This downward trend is notable, since the change for the U.S. population as a whole was negligible – from 63 percent to 62 percent support. These results were based on interviews conducted in 2013 about respondents’ immigration views and re-interviews with the same people in 2014.

Recent polling by PRRI shows that white evangelicals continue to be the most conservative of all major racial and religious groups on immigration issues ranging from policies to restrict the number of refugees entering the U.S. to separating families at the border.

There are stark differences on immigration issues between white and non-white evangelicals. The survey of more than 10,000 people I use in my book shows that in 2016, Latino, black and Asian-American evangelicals were half as likely to say immigrants have a negative effect on the economy compared to white evangelicals.

These differences remained even after I accounted for party identification and economic status.

Role of voter turnout

Who will come out to vote matters. NextGen America campus organizer Simone Williams talks with Grace Austin, a junior at the University of Wisconsin, about how to register to vote in Madison. AP Photo/Scott Bauer

In modern times, midterm elections garner lower levels of participation than presidential contests. Political scientists contend this is due to lower interest among potential voters.

Further, voters in midterm elections tend to be each party’s most reliable voters – those who have both high interest and strong voting histories. Research also shows that these reliable voters are generally older, whiter and more conservative than less frequent voters. And conservative immigration policies are at the heart of the GOP agenda in the upcoming midterms.

On the other hand, the voting rate among Latinos and Asian-Americans has been lower than that of blacks and whites. Part of the reason could be that some are not eligible to vote because of their citizenship status or age. But another factor is that many are simply not registered to vote.

In my view, conservative white evangelicals are not going to stay home on Nov. 6. While Latino and Asian-American evangelicals are increasing in number, they have yet to attain the same levels of political power.

The Conversation

Janelle Wong is a Public Fellow at PRRI (https://www.prri.org/about/our-team/#tab-fellows). She received funding from the Russell Sage Foundation.
