Dire Consequences of Suicide in the European Middle Ages

The cycle of life and death is an eternal truth of human history, yet attitudes toward both are shaped by many factors. Today, dying of old age is seen as the graceful close of the natural cycle of life and death, while dying early, whether through suicide or euthanasia, carries a different set of attitudes. Modern attitudes about suicide emerged largely from medieval socio-cultural and religious beliefs. Suicide, or self-murder, first appears in official records around the turn of the millennium, from 1000 AD onwards.

Today, the conversation around suicide has acquired greater empathy, viewed through the prism of psycho-social and mental wellbeing (or the absence of it). Yet the research of Australian religious-studies scholar Professor Carole M. Cusack indicates that it was religion that shaped this "medieval" attitude towards suicide. Criminal justice systems in medieval Europe, even secular ones, were influenced by theology and soon followed suit.

In the view of the Christian church, suicide was a sin and usually the sinner and his family were punished. And no self-murder victim was allowed to be buried with other good Christian souls in a cemetery. ( PeskyMonkey / Adobe Stock)

Christianity And Its Views On Suicide As A Sin

Christian morality and the role of Judas are heavily intertwined with the development of the idea of suicide as sin. According to all four canonical gospels, Judas Iscariot was one of the 12 disciples of Jesus Christ. Judas' betrayal ultimately started the chain of events leading to Jesus' crucifixion.

To prevent the crucifixion, Judas attempted to return the money he had been paid to reveal Jesus' identity. His failure to do so drove him to suicide by hanging. Over time, Judas' name came to be associated with betrayal and backstabbing, regarded with as much disgust as Brutus' betrayal of Caesar.

  • Staked Through the Heart and Buried at the Crossroads – The Profane Burial of Suicides
  • Petitioning for Death: Did Ancient Romans Really Ask for Permission to Commit Suicide?

One of the earliest documented views on suicide in Christianity is that of Augustine of Hippo, in The City of God (413-426 AD). He interpreted the Sixth Commandment, "Thou shalt not kill," as encompassing the self, and saw suicide as a "detestable and damnable wickedness," equating it with murder. Even where a Christian feared corruption or rape, Augustine held it unthinkable to consider suicide as an option.

He went as far as condemning the views of earlier Roman philosophers and statesmen, such as Cato, Seneca, and Lucan, who preached the nobility of suicide under exceptional circumstances. In the vast depths of Christian theological history, Augustine became the first to conflate suicide with sin. Juridically, the prosecution of suicides began from the 6th century AD onwards, the second century of the so-called Dark Ages or Middle Ages.

Medieval “feudal” justice, beginning in the 10th century AD, was severe on suicides, and serfs who killed themselves had their property confiscated. ( cranach / Adobe Stock)

Suicide and Medieval Justice

Between the 10th and 12th centuries, self-murder became a felony crime in many parts of Europe. Pre-industrial Europe, long before it became a vast imperial power, was under the influence not only of the Church but also of feudalism. The propertied nature of the lord-and-serf relationship meant that a master saw a peasant's suicide as a denial of his possession. Confiscation of the serf's goods was seen as a legitimate act of reclaiming what was in any case "the Lord's property."

The confiscation of land and property, whether by the overlord or the monarch, only increased statist power, and with greater authoritarian control, punishment became harsher. In France, the "Customs of Anjou and Maine" of 1411 equated suicide with rape and murder. Laws of the same period called for the house of the suicide victim to be torn down and the sinner's family to be banished. The victim's body, if male, was to be hanged again on the gallows and then burned. Even "post-mortem torture" was seen as a legitimate form of punishing suicide, particularly to instill the fear of self-murder in the living.

Suicide was also connected with bad luck, superstition, and older folkloric views across medieval Europe, persisting well into the 20th century and even today. (Édouard Manet)

Punishment of Suicide in the Afterlife

Those who committed suicide became the subject of gossip and folklore, often accused of upsetting the balance of order in nature. In Switzerland, for example, a bout of bad weather was blamed on the burial of a townswoman who had committed suicide. Punishment of suicide in the afterlife was enshrined in law too: in England, in 740 AD, the Archbishop of York drew up legislation ordering priests not to give Christian burials to those who had died by suicide.

  • The Honorable Death: Samurai and Seppuku in Feudal Japan
  • The Aokigahara Forest of Japan: Many Enter, But Few Walk Out Alive

Such laws only strengthened stigma and myth, at the cost of the deceased and their living family members. To protect themselves from social exile, family members would often try to influence the coroner's report in suicide cases. If that failed, they would attempt to hide some of the deceased's possessions so that these were protected from the state.

When a married man committed suicide, as was often the case, the state would leave the widow nothing. In many other cases, in a bid to hush up the matter, family members would attempt to bury the deceased themselves.

We explain what goodness is, how it has changed throughout history, and why it is a value. Also, goodness in the Bible.

Goodness is, as most dictionaries define it, the quality of being good: the natural tendency to spontaneously do good or, at least, to resist doing evil. The word comes from "good," which in Latin was bonus, in turn derived from duonos, meaning "efficient" or "correct."

Goodness today is a complex moral concept, encompassing other notions such as generosity, kindness, respect, consideration, empathy, gentleness, loyalty, honesty, and responsibility. This is because the very notion of "the good" has varied immensely throughout history, along with cultures and religions, that is, with socially valued codes of ethics and conduct.

For example, in Ancient Greece it was held that the good should always be beautiful and true at the same time, thus distinguishing it from pleasure and associating it with virtue, that is, with the harmonious and balanced. For this reason, according to the classical philosophers, human behavior should be governed by what is proportionate and measured.

Thus, the Greeks did not speak of goodness but of eudaimonia, a term translatable as "happiness" or "prosperity," the state of greatest satisfaction of the human being. They linked it in different ways to areté, or virtue, and to phronesis, or practical wisdom.

However, the vision of goodness that reigned in the West was strongly determined by Christianity, whose precepts were law throughout the European Middle Ages. For Christianity, the good was determined by God, whose will ruled the universe, but who at the same time gave human beings a free will they could use to do good or evil.

This notion of the good was revolutionary, especially because it democratized virtue. In the pre-Christian world, where nobles and aristocrats were born virtuous and slaves were born dishonored, the possibilities of doing good were not the same for everyone.

According to the Christian creed, by contrast, all human beings are made in the image and likeness of God and all bear original sin, so that we define ourselves morally more by our actions than by our origins.

The latter was key to the modern idea of goodness, as the German philosopher Immanuel Kant (1724-1804) later argued: the good cannot be judged without taking into account the will of the individual, since, if we were forced to act in a certain way, the very notions of good and bad would be lost, as there would be no alternative.

To do good is, therefore, to choose to do good, especially when there is no immediate reward to receive, that is, when we gain nothing by the decision.

Ilse Koch

The rise of the Nazi (National Socialist) Party in Germany led to the complete breakdown of democracy in that country. By 1934, opposing the Nazis in Germany was a dangerous position to take, and any critic of the regime could find themselves imprisoned in a concentration camp. The first Nazi camp had opened at Dachau in 1933, intended to gather political prisoners such as communists so as not to overburden the existing prison system. As the Nazis escalated the enforcement of their ideology on the population, more groups were rounded up and incarcerated, such as the Roma and homosexuals. By the time World War II broke out in 1939, the camps were also being used to house and often kill Jews, whose total annihilation was one of the Nazis’ main aims.

From early on in their existence, the concentration camps were administered by the SS (Schutzstaffel, meaning ‘protection squad’). The SS were an elite paramilitary group under the leadership of Heinrich Himmler. They had sworn to protect the Nazi Party and rid Germany of all people deemed ‘undesirable’, and were effectively put in charge of law enforcement in the Third Reich. Himmler expected the SS to live up to certain standards of behaviour, even while allowing them to carry out all manner of terrible acts – an individual officer stealing or taking justice into his own hands was not permitted.


Ilse Koch was born Margarete Ilse Köhler in Dresden, Germany, on 22 September 1906. Her father was a former soldier turned factory foreman and her mother a housewife. Little is known about her childhood, but perhaps surprisingly, given her later crimes, it is reported to have been a happy and normal one. After leaving school at 15, Ilse trained in accountancy and bookkeeping before getting a job working in the office of a factory.

During the 1920s, Germany was in turmoil as a result of World War I. The country’s economy was in dire straits and this created an opportunity for more extreme political factions to win support. The National Socialist (Nazi) Party was founded in 1920. It was against the Treaty of Versailles that had imposed punitive reparations on Germany after its defeat in 1918, and was violently opposed to communists and Jews, the latter of whom it saw as running global capitalism and profiting from the war. At first an insignificant political organisation among many others, the Nazis became more popular as economic depression persisted through the decade. Their supporters saw the party as a way of rebuilding Germany’s strength, and therefore bringing better employment opportunities for themselves and their families. By the early 1930s the Nazis had won seats in the German parliament, the Reichstag, and the political establishment was taking notice. Experienced politicians thought that they could manipulate the Nazis and their leader, Adolf Hitler, to win popular support and increase their own power. Their misjudgement allowed the Nazis to become the leading party in the Reichstag, from which position they began to take control of the state and eliminate their opponents. The first concentration camp, Dachau, was opened in 1933 to house political prisoners.

Ilse Koch had joined the Nazi Party in 1932 and began working as a secretary within the party. In 1934 she met Karl-Otto Koch, a convicted fraudster who had gone through a number of jobs before joining the SS, the elite paramilitary wing of the Nazi Party. The young woman recognised that the upwardly mobile Koch, ten years her senior, could give her power and privilege that she had never experienced in her life up to that point. The two began a relationship.

Koch served at several of the early concentration camps before being assigned to the newly established Sachsenhausen camp as commandant in 1936. This camp was sited close to Berlin and became the training ground for SS officers who would then go on to other camps further afield. It was also the testing location for the most ‘efficient’ forms of executing prisoners.

Ilse went with Karl to Sachsenhausen and became an Aufseherin (a female SS guard). These female guards were recruited to supervise female prisoners, of which there were up to 2000 at Sachsenhausen. Karl Koch pleased his SS superiors with the way he ran Sachsenhausen, and he was soon selected for a prestigious transfer – to help build and then run a new concentration camp called Buchenwald, near the city of Weimar.

Heinrich Himmler expected his SS officers to live up to the highest standards of his morality, which included marrying a suitable woman and having children who could be raised as ideal citizens of the Reich. Ilse Köhler’s ancestry was checked to make sure that her lineage was suitably ‘pure’ as an SS bride. Passing the test, Ilse wed Karl in a torchlit SS ceremony before beginning married life at Buchenwald.

The Kochs milked their new positions for all they were worth. A luxurious villa was built next to the camp for them, and there was even a zoo for the diversion of all the SS officers and their families. Despite the barbaric way in which he allowed his prisoners to be treated, Karl Koch enforced strict discipline on SS officers who mistreated the animals in the zoo.

When Jewish prisoners started to come to Buchenwald in large numbers in 1938, a whole new stream of income opened up for the corrupt commandant and his wife. They took the prisoners’ money and valuables for themselves, accumulating a great deal of wealth. Ilse Koch spent 200,000 Reichsmarks, about £850,000 in today’s money, on building an indoor arena for the sole purpose of riding her horses.

As the wife of the commandant, Ilse was expected to be a good housewife and mother, and have a cosy home waiting for her husband every evening when he returned from his duties. But this was not enough for Ilse. At Buchenwald she was able to indulge her most amoral tendencies almost without consequences. As well as using the prisoners’ belongings to enrich herself, she also demonstrated a deeply sadistic side. Witnesses later reported that she would wear revealing clothes around the prisoners, and if one of them even glanced at her, she would note his number and report him to her husband for a severe beating. She also conducted affairs with at least two married SS officers in the camp. But this was not what Ilse Koch became notorious for.

At Buchenwald, a Dr. Wagner was writing a scientific study about tattoos, and how much they could be associated with criminal behaviour. Buchenwald, particularly in its early years, contained many criminals, and so was suitable for a study of this kind. Ilse Koch was said to keep her eyes out for any particularly interesting tattoos on prisoners so that she could tell Dr Wagner about them. The prisoners in question would then be taken for examination and study, and reportedly were never seen again. Rumours flew round the camp that Ilse Koch ordered the doctor to use the tattooed skin to make household curios for her: book covers, lampshades, even gloves and a handbag.

Karl Koch’s reign at Buchenwald came to an end in 1941, when at least some of his nefarious deeds caught up with him. An investigation by the SS revealed the scale of his theft from the prisoners of Buchenwald – the problem was not that he had done it, but that he had kept too much for himself instead of declaring it to his superiors. As punishment, Koch was transferred to work at the death camp at Majdanek, but Ilse remained behind at Buchenwald. The marriage by this point had broken down, although there would be no question of divorce as it went against the ethos of the SS.

This was not the end of the investigation into the Kochs’ activities. The head of the SS in the Buchenwald area, Josias, Hereditary Prince of Waldeck and Pyrmont, had a personal dislike of Karl Koch, who came from a much lower-class background than he. He had noticed a name he knew on the death lists of Buchenwald (the SS scrupulously kept records of every death) – a medical man who had once treated him. Ordering the case to be looked into, the prince came to believe that Karl Koch had engineered the deaths of this man, Walter Krämer, and two others at Buchenwald to conceal the fact that he had been diagnosed with the sexually transmitted disease syphilis.

In August 1943, both Karl and Ilse Koch were arrested, he for incitement to murder and embezzlement and she for theft. The prosecutor, SS judge Georg Konrad Morgen, presented evidence showing that the Kochs’ life of luxury far outstripped what should have been manageable on their salaries. In December 1944, Karl Koch was found guilty and sentenced to death, but his wife Ilse was acquitted. Morgen was convinced of her guilt, but there was not enough evidence directly linking her to the alleged crimes.

Karl Koch was shot by an SS firing squad on 5 April 1945, ironically back at Buchenwald. The camp was liberated by advancing U.S. troops just days later. Ilse, meanwhile, had gone to live in Ludwigsburg with her two children. But she would find herself incarcerated again just after the end of the war in Europe, in June 1945, after she was recognised by a former Buchenwald prisoner. This time she was in the custody of the American military. Along with 30 other people formerly associated with the Buchenwald camp, she faced trial over several months in 1947. The world’s media followed the trial of Ilse Koch with almost salacious interest. The legend of the ‘Lady of the Lampshade’ had horrified and fascinated people when it emerged in the final days of the war. Ilse, the only woman among the defendants at the trial, became a symbol of the cruelty of Buchenwald and even the entire concentration camp system. However, no evidence was presented at her trial that directly proved the grisly stories true.

Ilse Koch was found guilty under the ‘common design’ principle, by which all defendants were considered to have committed crimes simply by being part of the Buchenwald administration. She escaped the death penalty because she was by then seven months pregnant, having conceived a child with a fellow prisoner while awaiting trial. Following the trial, the American authorities revisited the convictions and reduced many of the sentences they had handed down, including Ilse’s. This was because many of the victims at Buchenwald had been German, and the American military no longer felt that it should have had jurisdiction over the crimes. Ilse Koch was released in 1951 to an outcry from the general public in the U.S. However, she was immediately rearrested by the West German authorities and put on trial for a third time.

As she had all along, Ilse denied any knowledge of the crimes of which she was accused. She professed only to have been a housewife at Buchenwald. During the trial at Augsburg, she twice collapsed in court and had to be removed. She was sentenced in her absence to life in prison. Over the following years she repeatedly tried to appeal her conviction, but was unsuccessful. Despite being reunited with her son, who had been taken at his birth in 1947 to be put into foster care, Ilse Koch committed suicide in Aichach prison on 1 September 1967.



Suicide, from Latin suicidium, is "the act of taking one's own life". [9] [34] Attempted suicide or non-fatal suicidal behavior is self-injury with at least some desire to end one's life that does not result in death. [35] [36] Assisted suicide is when one individual helps another bring about their own death indirectly via providing either advice or the means to the end. [37] This is in contrast to euthanasia, where another person takes a more active role in bringing about a person's death. [37] Suicidal ideation is thoughts of ending one's life but not taking any active efforts to do so. [35] It may or may not involve exact planning or intent. [36] In a murder-suicide (or homicide-suicide), the individual aims at taking the lives of others at the same time. A special case of this is extended suicide, where the murder is motivated by seeing the murdered persons as an extension of their self. [38] Suicide in which the reason is that the person feels that they are not part of society is known as egoistic suicide. [39]

The normal verb in scholarly research and journalism for the act of suicide is commit. [40] [41] Some advocacy groups recommend using the terms completed suicide, took his/her own life, died by suicide, or killed him/herself instead of committed suicide. [42] [43] [44] [45] [46] The Associated Press Stylebook recommends avoiding "committed suicide" except in direct quotes from authorities. [47] Opponents of commit argue that it implies that suicide is criminal, sinful, or morally wrong. [48]

Factors that affect the risk of suicide include mental disorders, drug misuse, psychological states, cultural, family and social situations, genetics, experiences of trauma or loss, and nihilism. [50] [51] [15] Mental disorders and substance misuse frequently co-exist. [52] Other risk factors include having previously attempted suicide, [22] the ready availability of a means to take one's life, a family history of suicide, or the presence of traumatic brain injury. [53] For example, suicide rates have been found to be greater in households with firearms than those without them. [54]

Socio-economic problems such as unemployment, poverty, homelessness, and discrimination may trigger suicidal thoughts. [55] [56] Suicide might be rarer in societies with high social cohesion and moral objections against suicide. [36] About 15–40% of people leave a suicide note. [57] War veterans have a higher risk of suicide due in part to higher rates of mental illness, such as post traumatic stress disorder, and physical health problems related to war. [58] Genetics appears to account for between 38% and 55% of suicidal behaviors. [59] Suicides may also occur as a local cluster of cases. [60]

Most research does not distinguish between risk factors that lead to thinking about suicide and risk factors that lead to suicide attempts. [61] [62] Risks for suicide attempt rather than just thoughts of suicide include a high pain tolerance and a reduced fear of death. [63]

Mental illness

Mental illness is present at the time of suicide 27% to more than 90% of the time. [64] [22] [65] [66] Of those who have been hospitalized for suicidal behavior, the lifetime risk of completed suicide is 8.6%. [22] [67] Comparatively, non-suicidal people hospitalized for affective disorders have a 4% lifetime risk of suicide. [67] Half of all people who die by suicide may have major depressive disorder; having this or another mood disorder such as bipolar disorder increases the risk of suicide 20-fold. [68] Other conditions implicated include schizophrenia (14%), personality disorders (8%), [69] [70] obsessive compulsive disorder, [71] and posttraumatic stress disorder. [22] Those with autism spectrum disorders also attempt and consider suicide more frequently. [72]

Others estimate that about half of people who complete suicide could be diagnosed with a personality disorder, with borderline personality disorder being the most common. [73] About 5% of people with schizophrenia die of suicide. [74] Eating disorders are another high risk condition. [75]

In approximately 80% of completed suicides, the individual had seen a physician within the year before their death, [76] including 45% within the prior month. [77] Approximately 25–40% of those who completed suicide had contact with mental health services in the prior year. [64] [76] Antidepressants of the SSRI class appear to increase the frequency of suicide among children but do not change the risk among adults. [78] An unwillingness to get help for mental health problems also increases the risk. [60]

Previous attempts and self-harm

A previous history of suicide attempts is the most accurate predictor of completed suicide. [22] Approximately 20% of suicides have had a previous attempt, and of those who have attempted suicide, 1% complete suicide within a year [22] and more than 5% die by suicide within 10 years. [75] Acts of self-harm are not usually suicide attempts and most who self-harm are not at high risk of suicide. [79] Some who self-harm, however, do still end their life by suicide, and risk for self-harm and suicide may overlap. [79]

Psychosocial factors

A number of psychological factors increase the risk of suicide including: hopelessness, loss of pleasure in life, depression, anxiousness, agitation, rigid thinking, rumination, thought suppression, and poor coping skills. [68] [80] [81] A poor ability to solve problems, the loss of abilities one used to have, and poor impulse control also play a role. [68] [82] In older adults, the perception of being a burden to others is important. [83] Those who have never married are also at greater risk. [22] Recent life stresses, such as a loss of a family member or friend or the loss of a job, might be a contributing factor. [68] [60]

Certain personality factors, especially high levels of neuroticism and introversion, have been associated with suicide. This might lead people who are isolated and sensitive to distress to be more likely to attempt suicide. [80] On the other hand, optimism has been shown to have a protective effect. [80] Other psychological risk factors include having few reasons for living and feeling trapped in a stressful situation. [80] The stress response system in the brain may be altered during suicidal states, [36] specifically through changes in the polyamine system [84] and the hypothalamic–pituitary–adrenal axis. [85]

Social isolation and the lack of social support have been associated with an increased risk of suicide. [80] Poverty is also a factor, [86] with heightened relative poverty compared to those around a person increasing suicide risk. [87] Over 200,000 farmers in India have died by suicide since 1997, partly due to issues of debt. [88] In China, suicide is three times as likely in rural regions as in urban ones, partly, it is believed, due to financial difficulties in these areas of the country. [89]

The time of year may also affect suicide rates. There appears to be a decrease around Christmas, [90] but an increase in rates during spring and summer, which might be related to exposure to sunshine. [36] Another study found that the risk may be greater for males on their birthday. [91]

Being religious may reduce one's risk of suicide, while beliefs that suicide is noble may increase it. [92] [60] [93] This has been attributed to the negative stance many religions take against suicide and to the greater connectedness religion may give. [92] Muslims appear to have a lower rate of suicide than other religious groups; however, the data supporting this is not strong. [29] There does not appear to be a difference in rates of attempted suicide. [29] Young women in the Middle East may have higher rates. [94]

Substance misuse

Substance misuse is the second most common risk factor for suicide after major depression and bipolar disorder. [95] Both chronic substance misuse as well as acute intoxication are associated. [52] [96] When combined with personal grief, such as bereavement, the risk is further increased. [96] Substance misuse is also associated with mental health disorders. [52]

Most people are under the influence of sedative-hypnotic drugs (such as alcohol or benzodiazepines) when they die by suicide, [97] with alcoholism present in between 15% and 61% of cases. [52] Use of prescribed benzodiazepines is associated with an increased rate of attempted and completed suicide. The pro-suicidal effects of benzodiazepines are suspected to be due to a psychiatric disturbance caused by side effects, such as disinhibition, or withdrawal symptoms. [10] Countries that have higher rates of alcohol use and a greater density of bars generally also have higher rates of suicide. [98] About 2.2–3.4% of those who have been treated for alcoholism at some point in their life die by suicide. [98] Alcoholics who attempt suicide are usually male, older, and have tried to take their own lives in the past. [52] Between 3 and 35% of deaths among those who use heroin are due to suicide (approximately fourteenfold greater than those who do not use). [99] In adolescents who misuse alcohol, neurological and psychological dysfunctions may contribute to the increased risk of suicide. [100]

The misuse of cocaine and methamphetamine has a high correlation with suicide. [52] [101] In those who use cocaine, the risk is greatest during the withdrawal phase. [102] Those who use inhalants are also at significant risk, with around 20% attempting suicide at some point and more than 65% considering it. [52] Smoking cigarettes is associated with the risk of suicide. [103] There is little evidence as to why this association exists; however, it has been hypothesized that those who are predisposed to smoking are also predisposed to suicide, that smoking causes health problems which subsequently make people want to end their life, and that smoking affects brain chemistry, causing a propensity for suicide. [103] Cannabis, however, does not appear to independently increase the risk. [52]

Medical conditions

There is an association between suicidality and physical health problems, [75] such as chronic pain, [104] traumatic brain injury, [105] cancer, [106] chronic fatigue syndrome, kidney failure (requiring hemodialysis), HIV, and systemic lupus erythematosus. [75] The diagnosis of cancer approximately doubles the subsequent frequency of suicide. [106] The prevalence of increased suicidality persists after adjusting for depressive illness and excessive alcohol use, and is particularly high among people with more than one medical condition. In Japan, health problems are listed as the primary justification for suicide. [107]

Sleep disturbances, such as insomnia [108] and sleep apnea, are risk factors for depression and suicide. In some instances, the sleep disturbances may be a risk factor independent of depression. [109] A number of other medical conditions may present with symptoms similar to mood disorders, including hypothyroidism, Alzheimer's, brain tumors, systemic lupus erythematosus, and adverse effects from a number of medications (such as beta blockers and steroids). [22]


Media

The media, including the Internet, plays an important role. [50] [80] Certain depictions of suicide may increase its occurrence, with high-volume, prominent, repetitive coverage glorifying or romanticizing suicide having the most impact. [110] When detailed descriptions of how to kill oneself by a specific means are portrayed, this method of suicide can be imitated by vulnerable people. [16] This phenomenon has been observed in several cases after press coverage. [111] [112] One effective way to reduce the adverse effect of media portrayals of suicide is to educate journalists on how to report suicide news in a manner that reduces the possibility of imitation and encourages those at risk to seek help. When journalists follow certain reporting guidelines the risk of suicides can be decreased. [110] Getting buy-in from the media industry, however, can be difficult, especially in the long term. [110]

This trigger of suicide contagion or copycat suicide is known as the "Werther effect", named after the protagonist in Goethe's The Sorrows of Young Werther, who killed himself and was then emulated by many admirers of the book. [113] This risk is greater in adolescents, who may romanticize death. [114] It appears that while the news media have a significant effect, that of the entertainment media is equivocal. [115] [116] It is unclear whether searching for information about suicide on the Internet relates to the risk of suicide. [117] The opposite of the Werther effect is the proposed "Papageno effect", in which coverage of effective coping mechanisms may have a protective effect. The term is based upon a character in Mozart's opera The Magic Flute: fearing the loss of a loved one, he had planned to kill himself until his friends helped him out. [113] Accordingly, fictional portrayals of suicide that show alternative or negative consequences might have a preventive effect; [118] for instance, fiction might normalize mental health problems and encourage help-seeking. [119]

Other factors

Trauma is a risk factor for suicidality in both children [120] and adults. [80] Some may take their own lives to escape bullying or prejudice. [121] A history of childhood sexual abuse [122] and time spent in foster care are also risk factors. [123] Sexual abuse is believed to contribute to approximately 20% of the overall risk. [59] Significant adversity early in life has a negative effect on problem-solving skills and memory, both of which are implicated in suicidality. [36]

Problem gambling is associated with increased suicidal ideation and attempts compared with the general population. [124] Between 12% and 24% of pathological gamblers attempt suicide. [125] The rate of suicide among their spouses is three times greater than that of the general population. [125] Other factors that increase the risk in problem gamblers include concomitant mental illness, alcohol, and drug misuse. [126]

Genetics might influence rates of completed suicides. A family history of suicide, especially in the mother, affects children more than adolescents or adults. [80] Adoption studies have shown that this is the case for biological relatives, but not adopted relatives. This makes familial risk factors unlikely to be due to imitation. [36] Once mental disorders are accounted for, the estimated heritability rate is 36% for suicidal ideation and 17% for suicide attempts. [36] An evolutionary explanation for suicide is that it may improve inclusive fitness. This may occur if the person dying by suicide cannot have more children and would take resources away from relatives by staying alive. An objection is that deaths by healthy adolescents likely do not increase inclusive fitness. Adaptation to a very different ancestral environment may be maladaptive in the current one. [82] [127]

Infection by the parasite Toxoplasma gondii, more commonly known as toxoplasmosis, has been linked with suicide risk. One explanation states that this is caused by altered neurotransmitter activity due to the immunological response. [36]

There appears to be a link between air pollution and depression and suicide. [128]

Rational suicide

Rational suicide is the reasoned taking of one's own life. [129] However, some consider suicide as never being rational. [129]

Euthanasia and assisted suicide are accepted practices in a number of countries among those who have a poor quality of life without the possibility of getting better. [130] [131] They are supported by the legal arguments for a right to die. [131]

The act of taking one's life for the benefit of others is known as altruistic suicide. [132] An example of this is an elder ending his or her life to leave greater amounts of food for the younger people in the community. [132] Suicide in some Inuit cultures has been seen as an act of respect, courage, or wisdom. [133]

A suicide attack is a political or religious action where an attacker carries out violence against others which they understand will result in their own death. [134] Some suicide bombers are motivated by a desire to obtain martyrdom or are religiously motivated. [58] Kamikaze missions were carried out as a duty to a higher cause or moral obligation. [133] Murder–suicide is an act of homicide followed within a week by suicide of the person who carried out the act. [135]

Mass suicides are often performed under social pressure where members give up autonomy to a leader. [136] Mass suicides can take place with as few as two people, often referred to as a suicide pact. [137] In extenuating situations where continuing to live would be intolerable, some people use suicide as a means of escape. [138] [139] Some inmates in Nazi concentration camps are known to have killed themselves during the Holocaust by deliberately touching the electrified fences. [140]

The leading method of suicide varies among countries. The leading methods in different regions include hanging, pesticide poisoning, and firearms. [17] These differences are believed to be in part due to availability of the different methods. [16] A review of 56 countries found that hanging was the most common method in most of the countries, [17] accounting for 53% of male suicides and 39% of female suicides. [142]

Worldwide, 30% of suicides are estimated to occur from pesticide poisoning, most of which occur in the developing world. [2] The use of this method varies markedly, from 4% in Europe to more than 50% in the Pacific region. [143] It is also common in Latin America due to ease of access within farming populations. [16] In many countries, drug overdoses account for approximately 60% of suicides among women and 30% among men. [144] Many are unplanned and occur during an acute period of ambivalence. [16] The death rate varies by method: firearms 80–90%, drowning 65–80%, hanging 60–85%, jumping 35–60%, charcoal burning 40–50%, pesticides 60–75%, and medication overdose 1.5–4.0%. [16] The most common attempted methods of suicide differ from the most common methods of completion; up to 85% of attempts are via drug overdose in the developed world. [75]

In China, the consumption of pesticides is the most common method. [145] In Japan, self-disembowelment known as seppuku (harakiri) still occurs; [145] however, hanging and jumping are the most common. [146] Jumping to one's death is common in both Hong Kong and Singapore, at 50% and 80% respectively. [16] In Switzerland, firearms are the most frequent suicide method in young males; however, this method has decreased relatively since guns have become less common. [147] [148] In the United States, 50% of suicides involve the use of firearms, with this method being somewhat more common in men (56%) than women (31%). [149] The next most common method was hanging in males (28%) and self-poisoning in females (31%). [149] Together, hanging and poisoning constituted about 42% of U.S. suicides as of 2017. [149]

There is no known unifying underlying pathophysiology for suicide. [22] It is however believed to result from an interplay of behavioral, socio-economic and psychological factors. [16]

Low levels of brain-derived neurotrophic factor (BDNF) are both directly associated with suicide [150] and indirectly associated through its role in major depression, posttraumatic stress disorder, schizophrenia and obsessive–compulsive disorder. [151] Post-mortem studies have found reduced levels of BDNF in the hippocampus and prefrontal cortex, in those with and without psychiatric conditions. [152] Serotonin, a brain neurotransmitter, is believed to be low in those who die by suicide. [153] This is partly based on evidence of increased levels of 5-HT2A receptors found after death. [154] Other evidence includes reduced levels of a breakdown product of serotonin, 5-hydroxyindoleacetic acid, in the cerebral spinal fluid. [155] Direct evidence is however hard to gather. [154] Epigenetics, the study of changes in genetic expression in response to environmental factors which do not alter the underlying DNA, is also believed to play a role in determining suicide risk. [156]

Suicide prevention is a term used for the collective efforts to reduce the incidence of suicide through preventive measures. Protective factors for suicide include support, and access to therapy. [51] About 60% of people with suicidal thoughts do not seek help. [157] Reasons for not doing so include low perceived need, and wanting to deal with the problem alone. [157] Despite these high rates, there are few established treatments available for suicidal behavior. [80]

Reducing access to certain methods, such as firearms or toxins such as opioids and pesticides, can reduce the risk of suicide by that method. [16] [158] [15] [36] This may be in part because suicide is often an impulsive decision, with up to 70% of near-fatal suicide attempts made after less than one hour of deliberation; thus, reducing access to easily accessible methods of suicide may make impulsive attempts less likely to succeed. [159] Other measures include reducing access to charcoal (for burning) and adding barriers on bridges and subway platforms. [16] [160] [15] Treatment of drug and alcohol addiction, depression, and those who have attempted suicide in the past may also be effective. [158] [15] Some have proposed reducing access to alcohol as a preventive strategy (such as reducing the number of bars). [52]

In young adults who have recently thought about suicide, cognitive behavioral therapy appears to improve outcomes. [161] [80] School-based programs that increase mental health literacy and train staff have shown mixed results on suicide rates. [15] Economic development through its ability to reduce poverty may be able to decrease suicide rates. [86] Efforts to increase social connection, especially in elderly males, may be effective. [162] In people who have attempted suicide, following up on them might prevent repeat attempts. [163] Although crisis hotlines are common, there is little evidence to support or refute their effectiveness. [14] [15] Preventing childhood trauma provides an opportunity for suicide prevention. [120] The World Suicide Prevention Day is observed annually on September 10 with the support of the International Association for Suicide Prevention and the World Health Organization. [164]

Screening

There is little data on the effects of screening the general population on the ultimate rate of suicide. [165] [166] Screening those who come to emergency departments with injuries from self-harm has been shown to help identify suicidal ideation and intention. Psychometric tests such as the Beck Depression Inventory or, for older people, the Geriatric Depression Scale are used. [167] As there is a high rate of people who test positive via these tools but are not at risk of suicide, there are concerns that screening may significantly increase mental health care resource utilization. [168] Assessing those at high risk, however, is recommended. [22] Asking about suicidality does not appear to increase the risk. [22]

Mental illness

In those with mental health problems, a number of treatments may reduce the risk of suicide. Those who are actively suicidal may be admitted to psychiatric care either voluntarily or involuntarily. [22] Possessions that may be used to harm oneself are typically removed. [75] Some clinicians get patients to sign suicide prevention contracts where they agree to not harm themselves if released. [22] Evidence however does not support a significant effect from this practice. [22] If a person is at low risk, outpatient mental health treatment may be arranged. [75] Short-term hospitalization has not been found to be more effective than community care for improving outcomes in those with borderline personality disorder who are chronically suicidal. [169] [170]

There is tentative evidence that psychotherapy, specifically dialectical behaviour therapy, reduces suicidality in adolescents [171] as well as in those with borderline personality disorder. [172] It may also be useful in decreasing suicide attempts in adults at high risk. [173] Evidence however has not found a decrease in completed suicides. [171]

There is controversy around the benefit-versus-harm of antidepressants. [50] In young persons, some antidepressants, such as SSRIs, appear to increase the risk of suicidality from 25 per 1000 to 40 per 1000. [174] In older persons, however, they may decrease the risk. [22] Lithium appears effective at lowering the risk in those with bipolar disorder and major depression to nearly the same levels as that of the general population. [175] [176] Clozapine may decrease the thoughts of suicide in some people with schizophrenia. [177] Ketamine, which is a dissociative anaesthetic, seems to lower the rate of suicidal ideation. [178] In the United States, health professionals are legally required to take reasonable steps to try to prevent suicide. [179] [180]

Approximately 0.5% to 1.4% of people die by suicide, a mortality rate of 11.6 per 100,000 persons per year. [6] [22] Suicide resulted in 842,000 deaths in 2013, up from 712,000 deaths in 1990. [19] Rates of suicide increased by 60% from the 1960s to 2012, with these increases seen primarily in the developing world. [3] Globally, as of 2008/2009, suicide is the tenth leading cause of death. [3] For every suicide that results in death there are between 10 and 40 attempted suicides. [22]

Suicide rates differ significantly between countries and over time. [6] As a percentage of deaths in 2008 it was: Africa 0.5%, South-East Asia 1.9%, Americas 1.2% and Europe 1.4%. [6] Rates per 100,000 were: Australia 8.6, Canada 11.1, China 12.7, India 23.2, United Kingdom 7.6, United States 11.4 and South Korea 28.9. [181] [182] It was ranked as the 10th leading cause of death in the United States in 2016 with about 45,000 cases that year. [183] Rates have increased in the United States in the last few years, [183] with the highest value being in 2017 (the most recent data). [184] In the United States, about 650,000 people are seen in emergency departments yearly due to attempting suicide. [22] The United States rate among men in their 50s rose by nearly half in the decade 1999–2010. [185] Greenland, Lithuania, Japan, and Hungary have the highest rates of suicide. [6] Around 75% of suicides occur in the developing world. [2] The countries with the greatest absolute numbers of suicides are China and India, partly due to their large population size, accounting for over half the total. [6] In China, suicide is the 5th leading cause of death. [186]

Death rate from suicide per 100,000 as of 2017 [187]

Share of deaths from suicide, 2017 [188]

Sex and gender

Globally, as of 2012, death by suicide occurs about 1.8 times more often in males than females. [6] [189] In the Western world, males die three to four times more often by means of suicide than do females. [6] This difference is even more pronounced in those over the age of 65, with tenfold more males than females dying by suicide. [190] Suicide attempts and self-harm are between two and four times more frequent among females. [22] [191] [192] Researchers have attributed the difference between attempted and completed suicides among the sexes to males using more lethal means to end their lives. [190] [193] [194] However, separating intentional suicide attempts from non-suicidal self-harm is not currently done in places like the United States when gathering statistics at the national level. [195]

China has one of the highest female suicide rates in the world and is the only country where it is higher than that of men (ratio of 0.9). [6] [186] In the Eastern Mediterranean, suicide rates are nearly equivalent between males and females. [6] The highest rate of female suicide is found in South Korea at 22 per 100,000, with high rates in South-East Asia and the Western Pacific generally. [6]

A number of reviews have found an increased risk of suicide among transgender, lesbian, gay, and bisexual people. [196] [197] Among transgender persons, rates of attempted suicide are about 40% compared to a general population rate of 5%. [198] [199] This is believed to in part be due to social stigmatisation. [200]

In many countries, the rate of suicide is highest in the middle-aged [202] or elderly. [16] The absolute number of suicides, however, is greatest in those between 15 and 29 years old, due to the number of people in this age group. [6] Worldwide, the median age of suicide is between 30 and 49 for both men and women. [203] This means that half of people who died by suicide were approximately age 40 or younger, and half were older. [203] Suicidality is rare in children, but increases during the transition to adolescence. [204]

In the United States, the suicide death rate is greatest in Caucasian men older than 80 years, even though younger people more frequently attempt suicide. [22] It is the second most common cause of death in adolescents [50] and in young males is second only to accidental death. [202] In young males in the developed world, it is the cause of nearly 30% of mortality. [202] In the developing world rates are similar, but it makes up a smaller proportion of overall deaths due to higher rates of death from other types of trauma. [202] In South-East Asia, in contrast to other areas of the world, deaths from suicide occur at a greater rate in young females than elderly females. [6]

In ancient Athens, a person who died by suicide without the approval of the state was denied the honors of a normal burial. The person would be buried alone, on the outskirts of the city, without a headstone or marker. [205] However, it was deemed an acceptable method to deal with military defeat. [206] In Ancient Rome, while suicide was initially permitted, it was later deemed a crime against the state due to its economic costs. [207] Aristotle condemned all forms of suicide while Plato was ambivalent. [208] In Rome, some reasons for suicide included volunteering for death in gladiatorial combat, guilt over murdering someone, saving the life of another, mourning, shame after being raped, and escape from intolerable situations such as physical suffering, military defeat, or criminal pursuit. [208]

Suicide came to be regarded as a sin in Christian Europe and was condemned at the Council of Arles (452) as the work of the Devil. In the Middle Ages, the Church had drawn-out discussions as to when the desire for martyrdom was suicidal, as in the case of the martyrs of Córdoba. Despite these disputes and occasional official rulings, Catholic doctrine was not entirely settled on the subject of suicide until the later 17th century. A criminal ordinance issued by Louis XIV of France in 1670 was extremely severe, even for the times: the dead person's body was drawn through the streets, face down, and then hanged or thrown on a garbage heap. Additionally, all of the person's property was confiscated. [209] [210]

Attitudes towards suicide slowly began to shift during the Renaissance. John Donne's work Biathanatos contained one of the first modern defences of suicide, bringing proof from the conduct of Biblical figures, such as Jesus, Samson and Saul, and presenting arguments on grounds of reason and nature to sanction suicide in certain circumstances. [211]

The secularization of society that began during the Enlightenment questioned traditional religious attitudes toward suicide (such as Christian views on suicide) and brought a more modern perspective to the issue. David Hume denied that suicide was a crime, as it affected no one and was potentially to the advantage of the individual. In his 1777 Essays on Suicide and the Immortality of the Soul he rhetorically asked, "Why should I prolong a miserable existence, because of some frivolous advantage which the public may perhaps receive from me?" [211] Hume's analysis was criticized by philosopher Philip Reed as being "uncharacteristically (for him) bad", since Hume took an unusually narrow conception of duty and his conclusion depended upon the suicide producing no harm to others – including causing no grief, feelings of guilt, or emotional pain to any surviving friends and family – which is almost never the case. [212] A shift in public opinion at large can also be discerned; in 1786 The Times initiated a spirited debate on the motion "Is suicide an act of courage?". [213]

By the 19th century, the act of suicide had shifted in Europe from being viewed as caused by sin to being caused by insanity. [210] Although suicide remained illegal during this period, it increasingly became the target of satirical comment, such as the Gilbert and Sullivan comic opera The Mikado, which satirized the idea of executing someone who had already killed himself.

By 1879, English law began to distinguish between suicide and homicide, although suicide still resulted in forfeiture of the estate. [214] In 1882, the deceased were permitted daylight burial in England, [215] and by the middle of the 20th century, suicide had become legal in much of the Western world. The term suicide first emerged shortly before 1700, replacing earlier expressions for self-death, which was often characterized in the West as a form of self-murder. [208]

Legislation

No country in Europe currently considers suicide or attempted suicide to be a crime. [216] It was, however, a crime in most Western European countries from the Middle Ages until at least the 1800s. [214] The Netherlands was the first country to legalize both physician-assisted suicide and euthanasia, which took effect in 2002, although only doctors are allowed to assist, and they must follow a protocol prescribed by Dutch law. [217] If this protocol is not followed, it is an offence punishable by law. In Germany, active euthanasia is illegal and anyone present during a suicide may be prosecuted for failure to render aid in an emergency. [218] Switzerland has taken steps to legalize assisted suicide for the chronically mentally ill. The high court in Lausanne, Switzerland, in a 2006 ruling, granted an anonymous individual with longstanding psychiatric difficulties the right to end his own life. [219] England and Wales decriminalized suicide via the Suicide Act 1961, and the Republic of Ireland did so in 1993. [216] The word "commit" was used in reference to suicide's having been illegal; however, many organisations have stopped using it because of its negative connotations. [220] [221]

In the United States, suicide is not illegal but may be associated with penalties for those who attempt it. [216] Physician-assisted suicide is legal in the state of Washington for people with terminal diseases. [222] In Oregon, people with terminal diseases may request medications to help end their life. [223] Canadians who have attempted suicide may be barred from entering the United States. U.S. laws allow border guards to deny access to people who have a mental illness, including those with previous suicide attempts. [224] [225]

In Australia, suicide is not a crime. [226] It however is a crime to counsel, incite, or aid and abet another in attempting to die by suicide, and the law explicitly allows any person to use "such force as may reasonably be necessary" to prevent another from taking their own life. [227] The Northern Territory of Australia briefly had legal physician-assisted suicide from 1996 to 1997. [228]

In India, suicide used to be illegal and surviving family could face legal difficulties. [229] The Indian government repealed this law in 2014. [230] It remains a criminal offense in most Muslim-majority nations. [29]

Religious views

Most forms of Christianity consider suicide sinful, based mainly on the writings of influential Christian thinkers of the Middle Ages, such as St. Augustine and St. Thomas Aquinas, but suicide was not considered a sin under the Byzantine Christian code of Justinian, for instance. [231] [232] In Catholic doctrine, the argument is based on the commandment "Thou shalt not kill" (made applicable under the New Covenant by Jesus in the Gospel of Matthew [233] ), as well as the idea that life is a gift given by God which should not be spurned, and that suicide is against the "natural order" and thus interferes with God's master plan for the world. [234] However, it is believed that mental illness or grave fear of suffering diminishes the responsibility of the one completing suicide. [235]

Judaism focuses on the importance of valuing this life, and as such, suicide is tantamount to denying God's goodness in the world. Despite this, under extreme circumstances, when there has seemed no choice but to be killed or to betray their religion, there are several accounts of Jews having died by suicide, either individually or in groups (see Holocaust, Masada, the First French persecution of the Jews, and York Castle for examples); as a grim reminder, there is even a prayer in the Jewish liturgy for "when the knife is at the throat", for those dying "to sanctify God's Name" (see Martyrdom). These acts have received mixed responses from Jewish authorities, regarded by some as examples of heroic martyrdom, while others state that it was wrong for them to take their own lives in anticipation of martyrdom. [236]

Islamic religious views are against suicide. [29] The Quran forbids it by stating "do not kill or destroy yourself". [237] The hadiths also state individual suicide to be unlawful and a sin. [29] Stigma is often associated with suicide in Islamic countries. [237]

In Hinduism, suicide is generally frowned upon and is considered as sinful as murdering another in contemporary Hindu society. Hindu scriptures state that one who dies by suicide will become part of the spirit world, wandering earth until the time one would otherwise have died, had one not taken one's own life. [238] However, Hinduism accepts a person's right to end their life through the non-violent practice of fasting to death, termed Prayopavesa; [239] Prayopavesa is strictly restricted to people who have no desire or ambition left and no responsibilities remaining in this life. [239] Jainism has a similar practice named Santhara. Sati, or self-immolation by widows, is a rare and illegal practice in Hindu society. [240]

Within the Ainu religion, someone who dies by suicide is believed to become a ghost (tukap) who haunts the living, [241] seeking the fulfillment from which they were excluded during life. [242] Also, someone who insults another so that they kill themselves is regarded as co-responsible for their death. [243] According to Norbert Richard Adami, this ethic exists because solidarity within the community is much more important in Ainu culture than in the Western world. [243]

Philosophy

A number of questions are raised within the philosophy of suicide, including what constitutes suicide, whether or not suicide can be a rational choice, and the moral permissibility of suicide. [244] Arguments as to acceptability of suicide in moral or social terms range from the position that the act is inherently immoral and unacceptable under any circumstances, to a regard for suicide as a sacrosanct right of anyone who believes they have rationally and conscientiously come to the decision to end their own lives, even if they are young and healthy.

Opponents to suicide include philosophers such as Augustine of Hippo, Thomas Aquinas, [244] Immanuel Kant [245] and, arguably, John Stuart Mill – Mill's focus on the importance of liberty and autonomy meant that he rejected choices which would prevent a person from making future autonomous decisions. [246] Others view suicide as a legitimate matter of personal choice. Supporters of this position maintain that no one should be forced to suffer against their will, particularly from conditions such as incurable disease, mental illness, and old age, with no possibility of improvement. They reject the belief that suicide is always irrational, arguing instead that it can be a valid last resort for those enduring major pain or trauma. [247] A stronger stance would argue that people should be allowed to autonomously choose to die regardless of whether they are suffering. Notable supporters of this school of thought include Scottish empiricist David Hume, [244] who accepted suicide so long as it did not harm or violate a duty to God, other people, or the self, [212] and American bioethicist Jacob Appel. [219] [248]

Advocacy

Advocacy of suicide has occurred in many cultures and subcultures. The Japanese military during World War II encouraged and glorified kamikaze attacks, which were suicide attacks by military aviators from the Empire of Japan against Allied naval vessels in the closing stages of the Pacific Theater of World War II. Japanese society as a whole has been described as "suicide-tolerant" [250] (see Suicide in Japan).

Internet searches for information on suicide return webpages that 10–30% of the time encourage or facilitate suicide attempts. There is some concern that such sites may push those predisposed over the edge. Some people form suicide pacts online, either with pre-existing friends or people they have recently encountered in chat rooms or message boards. The Internet, however, may also help prevent suicide by providing a social group for those who are isolated. [251]

Locations

Some landmarks have become known for high levels of suicide attempts. [252] These include China's Nanjing Yangtze River Bridge, [253] San Francisco's Golden Gate Bridge, Japan's Aokigahara Forest, [254] England's Beachy Head, [252] and Toronto's Bloor Street Viaduct. [255] As of 2010, more than 1,300 people had died by suicide by jumping from the Golden Gate Bridge since its construction in 1937. [256] Many locations where suicide is common have constructed barriers to prevent it; [257] these include the Luminous Veil in Toronto, [255] the Eiffel Tower in Paris, the West Gate Bridge in Melbourne, and the Empire State Building in New York City. [257] They generally appear to be effective. [258]

Notable cases

An example of mass suicide is the 1978 Jonestown mass murder/suicide in which 909 members of the Peoples Temple, an American new religious movement led by Jim Jones, ended their lives by drinking grape Flavor Aid laced with cyanide and various prescription drugs. [259] [260] [261]

Thousands of Japanese civilians took their own lives in the last days of the Battle of Saipan in 1944, some jumping from "Suicide Cliff" and "Banzai Cliff". [262] The 1981 Irish hunger strikes, led by Bobby Sands, resulted in 10 deaths. The cause of death was recorded by the coroner as "starvation, self-imposed" rather than suicide; this was modified to simply "starvation" on the death certificates after protests from the dead strikers' families. [263] During World War II, Erwin Rommel was found to have had foreknowledge of the 20 July plot on Hitler's life; he was threatened with public trial, execution, and reprisals against his family unless he took his own life. [264]

As suicide requires a willful attempt to die, some feel it therefore cannot be said to occur in non-human animals. [206] Suicidal behavior has been observed in Salmonella seeking to overcome competing bacteria by triggering an immune system response against them. [265] Suicidal defenses by workers are also noted in the Brazilian ant Forelius pusillus, where a small group of ants leaves the security of the nest after sealing the entrance from the outside each evening. [266]

Pea aphids, when threatened by a ladybug, can explode themselves, scattering and protecting their brethren and sometimes even killing the ladybug; this form of suicidal altruism is known as autothysis. [267] Some species of termites (for example, Globitermes sulphureus [268] ) have soldiers that explode, covering their enemies with sticky goo. [269] [268]

There have been anecdotal reports of dogs, horses, and dolphins killing themselves, [270] but little scientific study of animal suicide. [271] Animal suicide is usually put down to romantic human interpretation and is not generally thought to be intentional. Some of the reasons animals are thought to unintentionally kill themselves include: psychological stress, infection by certain parasites or fungi, or disruption of a long-held social tie, such as the ending of a long association with an owner and thus not accepting food from another individual. [272]

Dire Consequences of Suicide in the European Middle Ages

The cycle of life and death is an eternal and unchanging truth of human history. Yet attitudes around both are influenced and shaped by a number of factors. Today, dying of old age is seen as a graceful conclusion to the natural cycle of life and death, but dying early, whether through suicide or euthanasia, has a different set of attitudes attached to it. Modern attitudes about suicide actually emerged from medieval socio-cultural and religious beliefs. Suicide, or self-murder, only began to appear in official records around the turn of the millennium, from 1000 AD onwards.

Today, the conversation around suicide has acquired greater empathy, seen through the prism of psycho-social and mental wellbeing (or, more particularly, its absence). Yet Australian religious scholar Professor Carole M. Cusack's research indicates that it was religion that shaped this "medieval" attitude towards suicide. Criminal justice systems in medieval Europe, even secular ones, were influenced by theology and soon followed suit.

Historical and cultural importance of canon law

Canon law has functioned in different historical periods in the organization of the church's liturgy, preaching, works of charity, and other activities through which Christianity was established and spread in the Mediterranean area and beyond. Canon law, moreover, had an essential role in the transmission of Greek and Roman jurisprudence and in the reception of Justinian law (Roman law as codified under the sponsorship of the Byzantine emperor Justinian in the 6th century) in Europe during the Middle Ages. Thus it is that the history of the Middle Ages, to the extent that they were dominated by ecclesiastical concerns, cannot be written without knowledge of the ecclesiastical institutions that were governed according to canon law.

Medieval canon law also had a lasting influence on the law of the Protestant churches. Numerous institutions and concepts of canon law have influenced the secular law and jurisprudence in lands influenced by Protestantism—e.g., marriage law, the law of obligations, the doctrine of modes of property acquisition, possession, wills, legal persons, the law of criminal procedure, and the law concerning proof or evidence. International law owes its very origin to canonists and theologians, and the modern idea of the state goes back to the ideas developed by medieval canonists regarding the constitution of the church. The history of the legal principles of the relation of sacerdotium to imperium—i.e., of ecclesiastical to secular authority or of church to state—is a central factor in European history.


After the decline of the Western Roman Empire, investiture was performed by members of the ruling nobility (and was known as lay investiture), despite theoretically being a task of the church. [3] Many bishops and abbots were themselves part of the ruling nobility. Given that most members of the European nobility practiced primogeniture, willing their titles to the eldest surviving male heir, surplus male siblings often sought careers in the upper levels of the church hierarchy. This was particularly true where the family had established a proprietary church or abbey on their estate. Since a substantial amount of wealth and land was usually associated with the office of a bishop or abbot, the sale of church offices—a practice known as "simony"—was an important source of income for leaders among the nobility, who themselves owned the land and by charity allowed the building of churches. Emperors relied heavily on bishops for their secular administration, since bishops were not hereditary or quasi-hereditary nobility with family interests. They justified their power by the theory of the divine right of kings.

Many of the papal selections before 1059 were influenced politically and militarily by European powers, often with a king or emperor announcing a choice that would be rubber-stamped by church electors. The Holy Roman Emperors of the Ottonian dynasty believed they should have the power to appoint the pope. Since the ascendance of the first of that line, Otto the Great (936–72), the bishops had been princes of the empire, had secured many privileges, and had become to a great extent feudal lords over great districts of the imperial territory. The control of these great units of economic and military power was for the king a question of primary importance because of its effect on imperial authority. [4] It was essential for a ruler or nobleman to appoint (or sell the office to) someone who would remain loyal. [3]

Simony became particularly notorious when Pope Benedict IX was accused of selling the papacy in 1045. Henry III, Holy Roman Emperor, who reigned from 1046 to 1056, settled the resulting papal schism and named several popes, the last emperor to successfully dominate the selection process. The six-year-old Henry IV became King of the Germans in 1056.

Benedict X was elected under the influence of the Count of Tusculum, allegedly by bribing the electors. Dissenting cardinals elected Pope Nicholas II in 1058 at Siena. Nicholas II successfully waged war against Benedict X and regained control of the Vatican. He then convened a synod in the Lateran at Easter 1059, whose results were codified in the papal bull In nomine Domini. It declared that leaders of the nobility would have no part in the selection of popes (though the Holy Roman Emperor might confirm the choice) and that the electors would be cardinals (a body that would later evolve into the College of Cardinals) assembled in Rome. The bull also banned lay investiture. In response, all the bishops in Germany (who supported the Emperor) assembled in 1061 and declared all the decrees of Nicholas II null and void. Nevertheless, the elections of Pope Alexander II and Pope Gregory VII proceeded according to church rules, without the involvement of the Emperor.

In 1075, Pope Gregory VII composed the Dictatus papae, though it was not published at the time, cataloging the principles of his Gregorian Reforms. One clause asserted that the deposal of an emperor was under the sole power of the pope. [5] It declared that the Roman church was founded by God alone and that the papal power (the auctoritas of Pope Gelasius) was the sole universal power; in particular, a council held in the Lateran Palace from 24 to 28 February the same year decreed that the pope alone could appoint or depose churchmen or move them from see to see. [6] By this time, Henry IV was no longer a child, and he continued to appoint his own bishops. [5] He reacted to this declaration by sending Gregory VII a letter in which he withdrew his imperial support of Gregory as pope in no uncertain terms: the letter was headed "Henry, king not through usurpation but through the holy ordination of God, to Hildebrand, at present not pope but false monk". [7] It called for the election of a new pope. His letter ends, "I, Henry, king by the grace of God, with all of my Bishops, say to you, come down, come down!", and is often quoted with "and to be damned throughout the ages", which is a later addition. [8]

The situation became even more dire when Henry IV installed his chaplain, Tedald, a Milanese priest, as Bishop of Milan, when another Milanese priest, Atto, had already been chosen in Rome by the pope. [9] In 1076 Gregory responded by excommunicating Henry and deposing him as German king, [10] releasing all Christians from their oath of allegiance. [11]

Enforcing these declarations was a different matter, but the advantage gradually came to be on the side of Gregory VII. The German princes and aristocracy were happy to hear of the king's deposition. They used religious reasons to continue the rebellion begun at the First Battle of Langensalza in 1075 and to justify the seizure of royal holdings. Aristocrats claimed local lordships over peasants and property, built forts, which had previously been outlawed, and built up localized fiefdoms to secure their autonomy from the empire. [5]

Because of these combined factors, Henry IV had no choice but to back down, needing time to marshal his forces to fight the rebellion. In 1077, he traveled to Canossa in northern Italy, where the Pope was staying in the castle of Countess Matilda, to apologize in person. [12] The pope was suspicious of Henry's motives and did not believe he was truly repentant. As penance for his sins, and echoing his own punishment of the Saxons after the First Battle of Langensalza, Henry wore a hair shirt and stood barefoot in the snow, in what has become known as the Walk to Canossa. Gregory lifted the excommunication, but the German aristocrats, whose rebellion became known as the Great Saxon Revolt, were not as willing to give up their opportunity and elected a rival king, Rudolf von Rheinfeld. Three years later, Pope Gregory declared his support for von Rheinfeld and, at the Lenten synod of 7 March 1080, excommunicated Henry IV again. [13] In turn, Henry called a council of bishops at Brixen that proclaimed Gregory illegitimate. [14] The internal revolt against Henry effectively ended that same year, however, when Rudolf von Rheinfeld died.

Henry IV named Guibert of Ravenna (whom he had invested as bishop of Ravenna) to be pope, referring to Clement III (known by the Catholic Church as Antipope Clement III) as "our pope". In 1081, Henry attacked Rome and besieged the city with the intent of forcibly removing Gregory VII and installing Clement III. The city of Rome withstood the siege, but the Vatican and St. Peter's fell in 1083. On the outskirts of the city, Henry won over thirteen cardinals who became loyal to his cause. The next year the city of Rome surrendered and Henry triumphantly entered it. On Palm Sunday 1084, Henry IV solemnly enthroned Clement at St. Peter's Basilica; on Easter Day, Clement returned the favour and crowned Henry IV as Emperor of the Holy Roman Empire.

Gregory VII was meanwhile still resisting a few hundred yards away from the basilica, in the Castel Sant'Angelo, then known as the house of Cencius. [15] Gregory called on his allies for help, and Robert Guiscard (the Norman ruler of Sicily, Apulia, and Calabria) responded, entering Rome on 27 May 1084. [16] The Normans came in force and attacked with such strength that Henry and his army fled. Gregory VII was rescued; however, the ferocity of the attack ultimately resulted in the plundering of Rome, for which the citizens of Rome blamed Gregory VII. As a result, he was forced to leave Rome under the protection of the Normans, who took him to Salerno, where he grew ill and died on 25 May 1085. [17] The last words he uttered were, "I have loved justice and hated iniquity, and therefore I die in exile." [18]

Upon the death of Gregory, the cardinals elected a new pope, Pope Victor III. He owed his elevation to the influence of the Normans. Antipope Clement III still occupied St. Peter's. When Victor III died, the cardinals elected Pope Urban II (1088–99). He was one of three men Gregory VII suggested as his successor. Urban II preached the First Crusade, which united Western Europe, and more importantly, reconciled the majority of bishops who had abandoned Gregory VII. [18]

The reign of Henry IV showed the weakness of the German monarchy. The ruler was dependent upon the good will of the great men of his land, the nobility, who were technically royal officials and hereditary princes. He was also dependent on the resources of the churches. Henry IV alienated the Church of Rome and many of the magnates in his own kingdom, many of whom spent years in open or subversive rebellion. Henry failed to create a proper bureaucracy to replace his disobedient vassals. The magnates became increasingly independent, and the Church withdrew its support. Henry IV spent the last years of his life desperately grasping to keep his throne, ruling a greatly diminished kingdom. [19]

The Investiture Controversy continued for several decades as each successive pope tried to diminish imperial power by stirring up revolt in Germany. These revolts were gradually successful. The reign of Henry IV ended with a diminished kingdom and waning power. Many of his underlords had been in constant or desultory revolt for years. Henry IV's insistence that Antipope Clement III was the real pope had initially been popular with some of the nobles, and even many of the bishops of Germany, but as the years passed this support was slowly withdrawn. The idea that the German king could and should name the pope was increasingly discredited and viewed as an anachronism from a bygone era. The empire of the Ottos was virtually lost because of Henry IV.

On 31 December 1105, Henry IV was forced to abdicate and was succeeded by his son Henry V, who had rebelled against his father in favor of the papacy and who had made his father renounce the legality of his antipopes before he died. Nevertheless, Henry V chose another antipope, Gregory VIII.

Henry V realised that swift action and a change in his father's policy were necessary. Pope Paschal II rebuked Henry V for appointing bishops in Germany, and the king crossed the Alps with an army in 1111. The pope, who was weak and had few supporters, was forced to suggest a compromise, the abortive Concordat of 1111. Its simple and radical solution [20] to the Investiture Controversy, between the prerogatives of regnum and sacerdotium, proposed that German churchmen would surrender their lands and secular offices to the emperor and constitute a purely spiritual church. Henry gained greater control over the lands of his kingdom, especially those that had been in the hands of the church but were of contested title. He would not interfere with ecclesiastical affairs, and churchmen would avoid secular services. The church would gain autonomy, and Henry V would recover large parts of the empire that his father had lost. Henry V was crowned by Pope Paschal II as the legitimate Holy Roman Emperor. When the concessions of land were read out in St. Peter's, however, the crowd revolted in anger. Henry took the pope and cardinals hostage until the pope granted him the right of investiture. Then he returned to Germany, crowned emperor and apparent victor over the papacy. [21]

The victory was as short-lived as that of his father over Gregory VII. The clergy urged Paschal to rescind his agreement, which he did in 1112. The quarrel followed the predictable course: Henry V rebelled and was excommunicated; riots broke out in Germany; a new antipope, Gregory VIII, was appointed by the German king; and nobles loyal to Rome seceded from Henry. The civil war continued, just as under Henry IV, dragging on for another ten years. Like his father before him, Henry V was faced with waning power. He had no choice but to give up investiture and the old right of naming the pope. The result was the Concordat of Worms, after which the German kings never had the same control over the Church as had existed in the time of the Ottonian dynasty. [19] As a result of the Concordat, Henry V was received back into communion and recognized as legitimate emperor.

Henry V died without heirs in 1125, three years after the Concordat. He had designated his nephew Frederick von Staufen, Duke of Swabia (Frederick II of Swabia), as his successor. Instead, churchmen elected Lothair III. A long civil war erupted between the supporters of the Staufen (also known as the Hohenstaufen) and the heirs of Lothair III, with the result that the Hohenstaufen Frederick I (r. 1152–1190) came to power. [22]

At the time of Henry IV's death, Henry I of England and the Gregorian papacy were also embroiled in a controversy over investiture, and its solution provided a model for the eventual solution of the issue in the empire.

William the Conqueror had accepted a papal banner and the distant blessing of Pope Alexander II upon his invasion, but after its successful outcome he rebuffed the pope's assertion that he should come to Rome and pay homage for his fief, under the general provisions of the Donation of Constantine.

The ban on lay investiture in Dictatus papae did not shake the loyalty of William's bishops and abbots. In the reign of Henry I, the heat of exchanges between Westminster and Rome induced Anselm, Archbishop of Canterbury, to give up mediating and retire to an abbey. Robert of Meulan, one of Henry's chief advisors, was excommunicated, but the threat of excommunicating the king remained unplayed. The papacy needed the support of English Henry while German Henry was still unbroken. A projected crusade also required English support.

Henry I commissioned the Archbishop of York to collect and present all the relevant traditions of anointed kingship. On this topic, the historian Norman Cantor would note: "The resulting 'Anonymous of York' treatises are a delight to students of early-medieval political theory, but they in no way typify the outlook of the Anglo-Norman monarchy, which had substituted the secure foundation of administrative and legal bureaucracy for outmoded religious ideology." [23]

The Concordat of London, agreed in 1107, was a forerunner of the compromise later taken up in the Concordat of Worms. In England, as in Germany, the king's chancery started to distinguish between the secular and ecclesiastical powers of the prelates. Bowing to political reality and employing this distinction, Henry I of England gave up his right to invest his bishops and abbots while reserving the custom of requiring them to swear homage for the "temporalities" (the landed properties tied to the episcopate) directly from his hand, in the commendation ceremony (commendatio), like any secular vassal. [24] The system of vassalage was not divided among great local lords in England as it was in France, since the king was in control by right of the conquest.

Later developments in England

Henry I of England perceived a danger in placing monastic scholars in his chancery and turned increasingly to secular clerks, some of whom held minor positions in the Church, often rewarding them with the titles of bishop and abbot. Henry I also expanded the system of scutage to reduce the monarchy's dependence on knights supplied from church lands. Unlike the situation in Germany, Henry I used the Investiture Controversy to strengthen the secular power of the king. The controversy nevertheless continued to simmer beneath the surface, resurfacing in the Thomas Becket affair under Henry II of England, the Great Charter of 1217, the Statutes of Mortmain, and the battles over Cestui que use under Henry VII of England, and finally coming to a head under Henry VIII of England.

The European mainland experienced about 50 years of fighting, with efforts by Lamberto Scannabecchi, the future Pope Honorius II, and the 1121 Diet of Würzburg to end the conflict. On September 23, 1122, near the German city of Worms, Pope Callixtus II and Holy Roman Emperor Henry V entered into an agreement, now known as the Concordat of Worms, that effectively ended the Investiture Controversy. It eliminated lay investiture, while allowing secular leaders some room for unofficial but significant influence in the appointment process.

By the terms of the agreement, the election of bishops and abbots in Germany was to take place in the presence of the emperor (or his legate) as judge ("without violence") between potentially disputing parties, free of bribes, thus retaining for the emperor a crucial role in choosing these great territorial magnates of the Empire. Absent a dispute, however, the canons of the cathedral were to elect the bishop, and monks were to choose the abbot. Beyond the borders of Germany, in Burgundy and Italy, elections would be handled by the church without imperial interference.

Callixtus' reference to the feudal homage due the emperor on appointment is guarded: "shall do unto thee for these what he rightfully should" was the wording of the privilegium granted by Callixtus. The emperor's right to a substantial imbursement (payment) on the election of a bishop or abbot was specifically denied.

The emperor renounced the right to invest ecclesiastics with ring and crosier, the symbols of their spiritual power, and guaranteed election by the canons of cathedral or abbey and free consecration. To make up for this, and to symbolise the worldly authority of the bishop, which the pope had always recognised to derive from the emperor, another symbol, the scepter, was introduced, to be handed over by the king (or his legate).

The two ended by promising mutual aid when requested and by granting one another peace. The Concordat was confirmed by the First Council of the Lateran in 1123.

Terminology

In modern terminology, a concordat is an international convention, specifically one concluded between the Holy See and the civil power of a country to define the relationship between the Catholic Church and the state in matters in which both are concerned. The first concordats date to 1098, at the end of the First Crusade. [25]

The Concordat of Worms (Latin: Concordatum Wormatiense) [26] is sometimes called the Pactum Callixtinum by papal historians, since the term "concordat" was not in use until Nicolas of Cusa's De concordantia catholica of 1434. [a]

Local authority

In the long term, the decline of imperial power would divide Germany until the 19th century. Similarly, in Italy, the investiture controversy weakened the emperor's authority and strengthened local separatists. [28]

While the monarchy was embroiled in the dispute with the Church, its power declined and the localized rights of lordship over peasants increased, which eventually led to:

  • Increased serfdom, reducing the rights of the majority
  • Increased local taxes and levies, while royal coffers declined
  • Localized rights of justice, with courts that did not have to answer to royal authority

Selection of leaders

The papacy grew stronger. Its marshalling of public opinion engaged lay people in religious affairs, increasing lay piety and setting the stage for the Crusades and the great religious vitality of the 12th century.

German kings still had de facto influence over the selection of German bishops, though over time the German princes gained influence among church electors. The bishop-elect would be invested by the Emperor (or his representative) with the scepter and, sometime afterwards, by his ecclesial superior with ring and staff. The resolution of the controversy produced a significant improvement in the character of the men raised to the episcopacy. Kings no longer interfered so frequently in their election, and when they did, they generally nominated more worthy candidates for the office. [29]

The Concordat of Worms did not end the interference of European monarchs in the selection of the pope. Practically speaking, the German king retained a decisive voice in the selection of the hierarchy. Ninety years after the Concordat of Worms, all the kings supported King John of England's defiance of Pope Innocent III in the matter concerning Stephen Langton. In theory, the pope named his bishops and cardinals; in reality, more often than not, Rome consecrated the clergy once it was notified by the kings who the incumbent would be. Recalcitrance by Rome would lead to problems in the kingdom, and for the most part it was a no-win situation for Rome. In this, the Concordat of Worms changed little. The growth of canon law in the ecclesiastical courts, based on the underlying Roman law, increased the strength of the Roman Pontiff. [30]

Disputes between popes and Holy Roman Emperors continued until northern Italy was lost to the empire entirely, after the wars of the Guelphs and Ghibellines. Emperor Otto IV marched on Rome and commanded Pope Innocent III to annul the Concordat of Worms and to recognise the imperial crown's right to make nominations to all vacant benefices. [31] The church would later crusade against the Holy Roman Empire under Frederick II. As historian Norman Cantor put it, the controversy "shattered the early-medieval equilibrium and ended the interpenetration of ecclesia and mundus". Indeed, medieval emperors, who were "largely the creation of ecclesiastical ideals and personnel", were forced to develop a secular bureaucratic state, whose essential components persisted in the Anglo-Norman monarchy. [32]

Kings continued for centuries to attempt to control the church, either through its direct leadership or indirectly through political means. This is seen most clearly in the Avignon Papacy, when the popes moved from Rome to Avignon. The conflict in Germany and northern Italy arguably left the culture ripe for dissenting religious movements such as the Cathars and the Waldensians, and ultimately for Jan Hus and Martin Luther.

Authority and reform

Though the Holy Roman Emperor retained some power over imperial churches, his power was damaged irreparably because he lost the religious authority that previously belonged to the office of the king. In France, England, and the Christian state in Spain, the king could overcome rebellions of his magnates and establish the power of his royal demesne because he could rely on the Church, which, for several centuries, had given him a mystical authority. From time to time, rebellious and recalcitrant monarchs might run afoul of the Church. These could be excommunicated, and after an appropriate time and public penance, be received back into the communion and good graces of the Church. [33]

Of the three reforms that Gregory VII and his predecessor and successor popes had attempted, they had been most successful in regard to the celibacy of the clergy. Simony had been partially checked. Against lay investiture they won only a limited success, one that seemed less impressive as the years passed. Nevertheless, in the period following the Concordat of Worms, the Church gained in both stature and power. [34]

The wording of the Concordat of Worms was ambiguous: it skirted some issues and avoided others altogether. This has caused some scholars to conclude that the settlement turned its back on Gregory VII's and Urban II's genuine hopes for reform. The emperor's influence in episcopal elections was preserved, and he could decide disputed elections. If the compromise was a rebuke to the most radical vision of the liberty of the Church, on at least one point its implication was firm and unmistakable: the king, even an emperor, was a layman, and his power was at least morally limited (hence, totalitarianism was unacceptable). In the opinion of W. Jordan, the divine right of kings was dealt a blow from which it never completely recovered, [35] yet unfettered authority and Caesaropapism were not what the later medievals and early moderns understood by the phrase "by the grace of God" (which many of them ardently defended). If anything, a blow was dealt to subconsciously surviving pre-Christian Germanic notions of "royal hail".

Unifications of Germany and Italy

A consequence of this lengthy episode was that a whole generation grew up in Germany and northern Italy in an atmosphere of war, doubt and scepticism. The papal backers had been busy propounding arguments to show that royal power was not of divine origin, and they had been so successful that the moral authority of the Emperor was undermined in the minds of many of his subjects. The battle over the Investiture Controversy left serious divisions that fractured large portions of the Holy Roman Empire in Germany and Italy. Davis argues these rifts were so deep and lasting that neither Germany nor Italy was able to form a cohesive nation state until the 19th century; a similar situation arose from the French Revolution, which caused fractures in France that still exist. [36] The effect of Henry IV's excommunication, and his subsequent refusal to repent, left a turbulence in central Europe that lasted throughout the Middle Ages. It may have been emblematic of certain German attitudes toward religion in general, and of the perceived relevance of the German Emperor in the universal scheme of things.

German culture

The catastrophic political consequences of the struggle between pope and emperor also led to a cultural disaster. Germany lost its intellectual leadership in western Europe. In 1050, German monasteries were great centres of learning and art, and German schools of theology and canon law were unsurpassed and probably unmatched anywhere in Europe. The long civil war over investiture sapped the energy of both German churchmen and intellectuals. They fell behind the advances in philosophy, law, literature and art taking place in France and Italy, and in many ways Germany never caught up during the rest of the Middle Ages. [37] Universities were established in France, Italy, Spain and England by the early 13th century: notable examples are the University of Bologna (1088), the University of Salamanca (1134), the University of Paris (1150), Oxford (1167) and Cambridge (1207). The first German university, Heidelberg, was not established until 1386, and it was immediately steeped in mediaeval nominalism and early Protestantism.

Development of liberty and prosperity in northern Europe

The political scientist Bruce Bueno de Mesquita argues that the Concordat of Worms contained within itself the germ of nation-based sovereignty that would one day be confirmed in the Peace of Westphalia (1648). The Concordat of Worms created an incentive structure for the rulers of the Catholic parts of Europe such that in the northern regions, local rulers were motivated to raise the prosperity and liberty of their subjects because such reforms helped those rulers assert their independence from the pope. [38]

With the Concordat of Worms, the pope became the de facto selector of bishops, as his recommendations all but guaranteed a candidate's nomination. Instead of myriad local customs, the matter came down to negotiations between the pope and the local secular ruler. The influence of the pope in a region therefore became the common deciding factor across the Catholic parts of Europe.

As a consequence of the Concordat, if a local ruler rejected the pope's nominee for bishop, the ruler could keep the revenue of the diocese for himself, but the pope could retaliate in various ways: ordering local priests not to perform certain sacraments, such as marriage, which would annoy the ruler's subjects; forgiving oaths made by vassals to the ruler; and even excommunicating the ruler, thereby undermining his moral legitimacy. Eventually, the ruler would have to give in to the pope and accept a bishop. The longer a local ruler could hold out against the pope, the more leverage the ruler had to demand a bishop who suited his interests.

In a region where the pope's influence was weak, the local priests might have performed the sacraments anyway, having calculated that defying the pope was not as dangerous as angering their parishioners; the ruler's vassals might have honored their oaths anyway, because the pope could not protect them from their lord's wrath; and the subjects might still have respected their ruler despite excommunication.

If the pope's influence in a diocese was weak, the local ruler could force the pope to choose between receiving the tax revenue and appointing a loyal bishop. If the diocese was relatively poor, the pope would stubbornly hold out until the local ruler accepted the pope's choice of bishop; during this standoff the pope received no money from the diocese, but that mattered little, since the diocese did not yield much anyway. If the diocese was prosperous, however, the pope wanted to resolve the dispute more quickly so that the ample revenue would flow into his coffers sooner, and so he was more inclined to let the local ruler pick the bishop.

A local secular ruler could stimulate the economy of his domain, and thereby collect more tax revenue, by giving his subjects more liberty and more participation in politics. The ruler had to raise enough tax revenue to provide sufficient rewards to his essential supporters in order to secure their loyalty. But liberalization and democratization would also make his subjects more assertive, which in itself made the ruler's hold on power less secure. Generally, a shrewd ruler would permit his people just enough liberty to raise sufficient tax revenue to provide his essential supporters with just enough rewards to keep them loyal (see selectorate theory for a thorough explanation of these trade-offs). In this specific context, the ruler of a diocese also had to consider whether to raise additional money, at the risk of liberalization, to convince the pope to compromise on the choice of bishop.

Under this incentive structure, if the pope's influence in a region was strong, the local ruler saw little point in liberalizing his state. He would raise more tax revenue, but not enough to get out from under the pope's thumb. Liberalization would make his people more assertive, and the pope would incite them to revolt. The pope would get both the money and his choice of bishop. Thus, the local ruler concluded that oppressing his people was the better strategy for political survival.

On the other hand, if the pope's influence in the region was weak, the local ruler calculated that liberalizing his state, thereby making it more prosperous, could give him enough leverage to get his choice of bishop. The pope would try to incite the people to revolt, but to weak effect. Thus, the local ruler could hold out for longer against the pope, and the pope would concede. The local ruler would get his preferred bishop, and the pope would get the money.
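The standoff dynamic described above can be sketched as a small toy model. This is purely illustrative and not from the source: the two inputs are hypothetical scores in [0, 1], and the "patience" formulas are invented assumptions meant only to capture the stated intuitions (the pope waits longer where his influence is strong and the forgone revenue is small; the ruler waits longer where papal sanctions have little local bite).

```python
# Toy model (illustrative assumptions, not historical quantities):
# who wins the standoff over a bishopric, given the pope's local
# influence and the wealth of the diocese.

def standoff_outcome(papal_influence: float, diocese_wealth: float) -> str:
    # The pope can afford to wait where his influence is strong and
    # a poor diocese costs him little revenue to boycott.
    pope_patience = papal_influence * (1.0 - diocese_wealth)
    # The ruler can afford to wait where papal sanctions (interdicts,
    # forgiven oaths, excommunication) have little local effect.
    ruler_patience = 1.0 - papal_influence
    if pope_patience >= ruler_patience:
        return "pope picks the bishop"
    return "ruler picks the bishop"

# Near Rome, a modest diocese: the pope holds out and prevails.
print(standoff_outcome(0.9, 0.2))   # pope picks the bishop
# Far from Rome, a rich diocese: the pope settles for the revenue.
print(standoff_outcome(0.2, 0.8))   # ruler picks the bishop
```

Note that, as the text says, the pope ends up with the diocesan revenue in either case; the model only captures who gets to choose the bishop.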

In the Catholic regions of Europe, the pope's influence was weaker the further away a region was from Rome, because in general it is difficult to project power over long distances and across difficult terrain such as mountains. This, Bueno de Mesquita argues, is why the northern regions of Europe, such as England and the Netherlands, became more prosperous and free than the southern regions. He further argues that this dynamic is what enabled the Protestant Reformation, which mostly happened in northern Europe. The northern parts of Europe were so prosperous, and the influence of the pope there so weak, that their local rulers could reject the pope's bishops indefinitely.

Science fiction writer Poul Anderson's novel The Shield of Time (1990) depicts two alternate history scenarios. In one, the imperial power completely and utterly defeated the Papacy, and in the other, the Papacy emerged victorious with the imperial power humbled and marginalized. Both scenarios end with a highly authoritarian and repressive 20th century that is completely devoid of democracy or civil rights. The conclusion stated by a protagonist is that the outcome in actual history (neither power gained a clear victory, with both continuing to counterbalance each other) was the best from the point of view of human liberty.

Dire Consequences of Suicide in the European Middle Ages - History

Suicide is an important, even urgent, topic, as the number of suicides has increased over the past 50 years, with about 1 million people taking their lives annually. In the United States, that amounts to about one suicide every 14 minutes (i.e., on average, three during our podcast). And yet, at first sight even a concept as seemingly obvious as suicide poses some significant problems right at the outset, when we consider a proper definition of the term.
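The "one every 14 minutes" figure can be sanity-checked with simple arithmetic; the implied annual US total below is derived from the text's own rate, not stated in it.

```python
# Quick check of the "one US suicide every 14 minutes" figure.
minutes_per_year = 365 * 24 * 60        # 525,600 minutes in a year
interval_minutes = 14
implied_annual_us = minutes_per_year / interval_minutes
# Works out to roughly 37,500 suicides per year in the US,
# consistent with the rate the text quotes.
print(round(implied_annual_us))
```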

For instance, I think we would all agree that Hitler committed suicide in his bunker toward the end of World War II, but what about Socrates, or Jesus? They didn’t directly take their lives, but they very purposefully did not take any action that could have easily avoided their deaths either.

However, friends, family and even the state do have a compelling interest in intervening — in a compulsory manner if necessary — in all cases in which these two conditions do not hold, and the philosophical justification for such intervention can reach back to an extended and updated version of Aristotle’s position.

One thing I definitely don’t go for is any kind of “sanctity” argument in opposition to suicide. I do not even consider for a moment, of course, any religious version of it, since I reject religion as a source of either knowledge or wisdom. The secular version, however, owes us an explanation of why, exactly, life is sacred, and no, Kant’s bizarre argument about the source of the moral law just isn’t going to cut it.

Again, both the Stoics and Hume, it seems to me, got it exactly right: life is valuable to the individual who is alive if it can be pursued according to certain conditions (e.g., it yields sufficient pleasures, the possibility of pursuing one’s own goals, is characterized by meaningful relationships, and so forth).

To quote Hume again, “The life of man is of no greater importance to the universe than that of an oyster.” Meaning and value are human concepts, and it is up to human beings — individually and societally — to make of them what they wish.

Massimo Pigliucci is Professor of Philosophy at the City University of New York.

The Great Famine (1315-1317) and the Black Death (1346-1351)

The 14th century was an era of catastrophes. Some of them man-made, such as the Hundred Years' War, the Avignon Papacy, and the Great Schism. These were caused by human beings, and we shall consider them a bit later. There were two more or less natural disasters either of which one would think would have been sufficient to throw medieval Europe into a real "Dark Ages": the Great Famine and the Black Death. Each caused millions of deaths, and each in its way demonstrated in dramatic fashion the existence of new vulnerabilities in Western European society. Together they subjected the population of medieval Europe to tremendous strains, leading many people to challenge old institutions and doubt traditional values, and, by so doing, these calamities altered the path of European development in many areas.

The Great Famine of 1315

By the beginning of the 14th century, however, the population had grown to such an extent that the land could provide enough resources to support it only under the best of conditions. There was no longer any margin for crop failures or even harvest shortfalls. At the same time, however, the Western European climate was undergoing a slight change, with cooler and wetter summers and earlier autumn storms. Conditions were no longer optimum for agriculture.

We have noted that there had been famines before, but none with such a large population to feed, and none that persisted for so long. A wet Spring in the year 1315 made it impossible to plow all of the fields that were ready for cultivation, and heavy rains rotted some of the seed grain before it could germinate. The harvest was far smaller than usual, and the food reserves of many families were quickly depleted. People gathered what food they could from the forests: edible roots, plants, grasses, nuts, and bark. Although many people were badly weakened by malnutrition, the historical evidence suggests that relatively few died. The Spring and Summer of 1316 were cold and wet again, however. Peasant families now had less energy with which to till the land needed for a harvest to make up for the previous shortfall and possessed a much smaller food supply in reserve to sustain them until the next harvest.

By the spring of 1317, all classes of society were suffering, although, as might be expected, the lower classes suffered the most. Draft animals were slaughtered, seed grain was eaten, infants and the younger children were abandoned. Many of the elderly voluntarily starved themselves to death so that the younger members of the family might live to work the fields again. There were numerous reports of cannibalism, although one can never tell if such talk was not simply a matter of rumor-mongering.

You might remember the story of Hansel and Gretel. Abandoned in the woods by their parents during a time of hunger, they were taken in by an old woman living in a cottage made of gingerbread and candy. They saw that the old woman was bringing in wood and heating the oven, and they discovered that she was planning on roasting and eating them. Gretel asked the woman to look inside the oven to see if it was hot enough, and then pushed her in and slammed the door. Like most of Grimm's Fairy Tales, this is a rather late tale, but it is illustrative of the grim possibilities with which the old tales for children are fraught.

The weather had returned to its normal pattern by the summer of 1317, but the people of Europe were incapable of making a quick recovery. An important factor in this situation was the scarcity of grain available to be used as seed. Although historians are still unsure of the validity of the figures, records of the time seem to indicate that a bushel of seed was needed in order to produce four bushels of wheat. At the height of the hunger in the late Spring of 1317, starving people had eaten much of the grain normally set aside as seed, as well as many of their draft animals.
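The 1:4 seed-to-yield ratio reported in the records explains why eating the seed grain was so costly. A stylized calculation (the 100-bushel figure is purely illustrative, not from the text):

```python
# Stylized arithmetic for the 1:4 seed-to-yield ratio: every bushel of
# seed grain eaten during the famine cost four bushels of next year's
# gross harvest, or three bushels net of the seed itself.
yield_per_bushel_of_seed = 4
seed_eaten = 100                                             # illustrative
gross_harvest_lost = seed_eaten * yield_per_bushel_of_seed   # 400 bushels
net_food_lost = gross_harvest_lost - seed_eaten              # 300 bushels
print(gross_harvest_lost, net_food_lost)                     # 400 300
```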

Even so, many of the surviving people and animals were simply too weak to work effectively. But about ten to fifteen percent of the population had died from pneumonia, bronchitis, tuberculosis, and other sicknesses that the starving sufferers' weakness had made fatal, and there were consequently fewer mouths to feed. So Europe was able to recover, although only slowly.

It was not until about 1325 that the food supply had returned to a relatively normal state, and population began to increase again. Europeans were badly shaken, however. The death rate had been high, and even nobles and clergy had perished from hunger. The world now seemed a less stable and "gentle" place than it had before the Great Famine. Another folk tale that arose about this time suggests a new and more violent attitude among the populace: the story of the Mouse Tower of Bingen.

There is an old stone tower in the German city of Bingen, and it is still pointed out to visitors as the famous Mouse Tower of the Bishop of Bingen.

The Black Death of 1347-1351

During the next few years, the European economy slowly improved, and agricultural and manufacturing production eventually reached pre-famine levels. This return to normalcy was suddenly ended in the year 1347 by a disaster even worse than the Great Famine.

Since the failure of Justinian's attempt to reconquer the lands of the Western Empire in 540-565, Europe had been relatively isolated, its population sparse, and intercommunication among its villages slight. It was as if the continent were divided up into a number of quarantine districts. Although many diseases were endemic (that is, they were always present), contagious diseases did not spread rapidly or easily. So the last pandemic (an epidemic that strikes literally everywhere within a short time) to strike Europe had been the one brought to the West by Justinian's armies in 547. By the 14th century, however, the revival of commerce and trade and the growth of population had altered that situation. There was much more movement of people from place to place within Europe, and European merchants travelled far afield into many more regions from which they could bring home both profitable wares and contagious diseases. Moreover, the diet, housing, and clothing of the average men and women of Western Europe were relatively poor, and a shortage of wood for fuel had made hot water a luxury and personal hygiene substandard.

Contrary to popular belief, medieval people actually liked to wash. They particularly enjoyed soaking in hot tubs and, as late as the mid-thirteenth century, most towns and even villages had public bath houses not unlike those of Japan today. The conversion of forest into arable land had reduced the supply of wood, however, and the bath houses began to shut down because of the expense of heating the water. They tried using coal, but decided that burning coal gave off unhealthy fumes (they were right, by the way) and abandoned the use of the stuff. By the mid-fourteenth century, only the rich could afford to bathe during the cold Winter months, and most of the population was dirty most of the time, even if they did not enjoy being so.

The Black Death seems to have arisen somewhere in Asia and was brought to Europe from the Genoese trading station of Kaffa in the Crimea (in the Black Sea). The story goes that the Mongols were besieging Kaffa when a sickness broke out among their forces and compelled them to abandon the siege. As a parting shot, the Mongol commander loaded a few of the plague victims onto his catapults and hurled them into the town. Some of the merchants left Kaffa for Constantinople as soon as the Mongols had departed, and they carried the plague with them. It spread from Constantinople along the trade routes, causing tremendous mortality along the way.

The disease was transmitted primarily by fleas and rats. The stomachs of the fleas were infected with bacteria known as Y. pestis. The bacteria would block the "throat" of an infected flea so that no blood could reach its stomach, and it grew ravenous since it was starving to death. It would attempt to suck up blood from its victim, only to disgorge it back into its prey's bloodstream. The blood it injected back, however, was now mixed with Y. pestis. Infected fleas infected rats in this fashion, and the other fleas infesting those rats were soon infected by their host's blood. They then spread the disease to other rats, from which other fleas were infected, and so on. As their rodent hosts died out, the fleas migrated to the bodies of humans and infected them in the same fashion as they had the rats, and so the plague spread.

The disease appeared in three forms:
bubonic [infection of the lymph system -- 60% fatal]
pneumonic [respiratory infection -- about 100% fatal], and
septicaemic [infection of the blood and probably 100% fatal]

The plague lasted in each area only about a year, but a third of a district's population would die during that period. People tried to protect themselves by carrying little bags filled with crushed herbs and flowers over their noses, but to little effect. Those infected with the bubonic form would experience great swellings ("bubos" in the Latin of the times) of their lymph glands and take to their beds. Those with the septicaemic form would die quickly, before any obvious symptoms had appeared. Those with the pneumonic form also died quickly, but not before developing evident symptoms: a sudden fever that turned the face a dark rose color, a sudden attack of sneezing, followed by coughing, coughing up blood, and death.

It is a popular (although incorrect) belief that this latter sequence is recalled in a children's game-song that most people know and have both played and sung:

Ring around the rosie,
A pocket full of posies,
Ashes, ashes,
All fall down!

According to this conception, the ring mentioned in the verse is a circular dance, and the plague was often portrayed as the danse macabre, in which a half-decomposed corpse was shown pulling an apparently healthy young man or woman into a ring of dancers that included men and women from all stations and dignities of life as well as corpses and skeletons. The rosie is believed to represent the victim with his or her face suffused with blood, and the posie is the supposedly prophylactic bag of herbs and flowers. Ashes, ashes is the sound of sneezing, and all fall down! is the signal to reenact the death which came so often in those times.

Some Consequences of the Plague

The disease finally played out in Scandinavia in about 1351 [see Ingmar Bergman's film The Seventh Seal], but another wave of the disease came in 1365 and several times after that until -- for some unknown reason -- the Black Death weakened and was replaced by waves of typhoid fever, typhus, or cholera. Europe continued to experience regular waves of such mortality until the mid-19th century. Although bubonic plague is still endemic in many areas, including New Mexico in the American Southwest, it does not spread as did the Black Death of 1347-1351.

The effects of that plague and its successors on the men and women of medieval Europe were profound: new attitudes toward death, the value of life, and one's self. It kindled a growth of class conflict, a loss of respect for the Church, and the emergence of a new pietism (personal spirituality) that profoundly altered European attitudes toward religion. Still another effect, however, was to kindle a new cultural vigor in Europe, one in which the national languages, rather than Latin, were the vehicle of expression. An example of this movement was Giovanni Boccaccio's The Decameron, a collection of tales written in 1350 and set in a country house where a group of noble young men and women of Florence have fled to escape the plague raging in the city.

These were natural disasters, but they were made all the worse by the inability of the directing elements of society, the princes and clergy, to offer any leadership during these crises. In the next few lectures we will examine the reasons for their failure to do so.

And Innocent Merriment

It was once the custom to follow every drama with a farce or ballet. I suppose that the theory was that the emotions of the audience were so exhausted by the passions that had been enacted, that they (the audience, not the emotions) needed a bit of good clean fun to restore the balance of their humors (I really should tell you about humors sometime). Following this venerable tradition, The Management now offers you a bit of doggerel.

"A sickly season," the merchant said,
"The town I left was filled with dead,
and everywhere these queer red flies
crawled upon the corpses' eyes,
eating them away."

"Fair make you sick," the merchant said,
"They crawled upon the wine and bread.
Pale priests with oil and books,
bulging eyes and crazy looks,
dropping like the flies."

"I had to laugh," the merchant said,
"The doctors purged, and dosed, and bled
"And proved through solemn disputation
"The cause lay in some constellation.
"Then they began to die."

"First they sneezed," the merchant said,
"And then they turned the brightest red,
Begged for water, then fell back.
With bulging eyes and face turned black,
they waited for the flies."

"I came away," the merchant said,
"You can't do business with the dead.
"So I've come here to ply my trade.
"You'll find this to be a fine brocade. "

And then he sneezed.

Next: The Hundred Years War 1336-1453

Lynn Harry Nelson
Emeritus Professor of
Medieval History
The University of Kansas
Lawrence, Kansas


The Black Death marks the barrier between the High Middle Ages and the Late Middle Ages, and the difference in Europe before and after the Black Death is clear. [1] The Black Death was one of many catastrophes to occur following an increase in population during the High Middle Ages (1000-1300). [1]

During the Late Middle Ages, the Bubonic Plague (sometimes known as the "Black Death") swept through Europe. [2] This traumatic population change coming into the Late Middle Ages caused great changes in European culture and lifestyle. [1]

The emergence at this particular time has unknown causes, yet some speculate that the "mini ice age", a climatic change felt in Europe prior to the Black Death, may have served in the process. [1] Inspired by the Black Death, The Dance of Death, or Danse Macabre, an allegory on the universality of death, was a common painting motif in the late medieval period. [3]

A: In the Middle Ages the Black Death, or "pestilencia", as contemporaries called various epidemic diseases, was the worst catastrophe in recorded history. [5]

While our culture, in its increasing secularism, and in its sanitization and silencing of death, is radically different from that of the European Middle Ages, the survival of such images as those depicted in the Appalachian song demonstrates the continuity, albeit uncomfortable, between the macabre culture of the late Middle Ages and our own. [6] The Danse Macabre, or Dance of Death, was a popular artistic motif in the late Middle Ages. [6]

"Black Death" in Dictionary of the Middle Ages, volume 2, pp.257-67, says "between 25 and 45percent". [7] It's hard to imagine how scary life was in the Middle Ages during the Black Death. [4] As the agriculturally oriented manorial system which had dominated life during the High Middle Ages slowly failed, industry rose, yet another benefit left in the wake of the Black Death. [8]

With early industrialization to the late nineteenth century or even later, overcrowding, public hygiene and sanitation worsened, infectious diseases rose in intensity, and life expectancies often dipped dramatically below levels of the late Middle Ages and early modern period. Yet these new levels of filth did not suddenly spark renewed outbreaks of the Black Death in Manchester or Calcutta, despite a probable increase in the density of human ectoparasites. [9] In the Late Middle Ages (1340-1400) Europe experienced the most deadly disease outbreak in history when the Black Death, the infamous pandemic of bubonic plague, hit in 1347. [10]

Whereas in the late Middle Ages Islamic doctors had for centuries been advocating sensible measures like general cleanliness and the value of studying anatomy, Western healers prior to 1347 were still encumbered by the Medieval scorn of the body and ancient medical fallacies like the theory of humors. [8] Philip Daileader, The Late Middle Ages, audio/video course produced by The Teaching Company, 2007. [7] The development of secular literature written in the vernacular continued and accelerated in the Late Middle Ages. [11] The popular lines at the beginning of this booklet keenly illustrate several of the key concepts present in a discussion of death-culture in the late Middle Ages. [6] Europe had experienced a remarkable period of expansion during the High Middle Ages (1050-1300 CE) but that age of growth reached its limit in the later part of the thirteenth century (the late 1200's CE). [8]

Numerous illustrations survive from the Middle Ages of Death as a personified figure, usually a skeleton or cadaver, either rising from the grave or approaching dying figures with a dart in his hand. [6] This lyric displays the key reason for thinking about death in the Middle Ages: the salvation of the soul. [6] What of those souls who had confessed their sins before death but not yet done penance? A widely debated idea in the Middle Ages was the doctrine of Purgatory, a between place where souls were relegated to be "purified" by fire, ice, or other means, before achieving the ultimate reward of Paradise. [6]

There were two more or less natural disasters either of which one would think would have been sufficient to throw medieval Europe into a real "Dark Ages": the Great Famine and the Black Death. [12] It is only recently, in the age of mass-media, where photographs, motion pictures, and, more recently, the internet have exposed us to the devastation wrought by such natural disasters as the south Asian tsunami of 2004 and Hurricane Katrina, and to such unnatural disasters as the Holocaust of World War II, that a large portion of the world population has become exposed to horrific images akin to those presented by the Black Death. [6]

The consequences of the Black Death on the culture of late Medieval Europe are immeasurable and, needless to say, mostly negative. [8] A study published in Nature in October 2011 sequenced the genome of Y. pestis from plague victims and indicated that the strain that caused the Black Death is ancestral to most modern strains of the disease. [3] The Black Death, also known as the Great Plague or simply Plague, or less commonly as the Black Plague, was one of the most devastating pandemics in human history, resulting in the deaths of an estimated 75 to 200 million people in Eurasia and peaking in Europe from 1347 to 1351. [3] "Climate-driven introduction of the Black Death and successive plague reintroductions into Europe". [3] The dominant explanation for the Black Death is the plague theory, which attributes the outbreak to Yersinia pestis, also responsible for an epidemic that began in southern China in 1865, eventually spreading to India. [3] The idea that the Black Death was solely caused by the bubonic strain of plague has been questioned. [1] Based on genetic evidence derived from Black Death victims in the East Smithfield burial site in England, Schuenemann et al. concluded in 2011 "that the Black Death in medieval Europe was caused by a variant of Y. pestis that may no longer exist." [3] Detailed study of the mortality data available points to two conspicuous features in relation to the mortality caused by the Black Death: namely the extreme level of mortality caused by the Black Death, and the remarkable similarity or consistency of the level of mortality, from Spain in southern Europe to England in north-western Europe. [3]

The Black Death ravaged Europe for three years before it continued on into Russia, where the disease was present somewhere in the country 25 times between 1350 and 1490. [3] In the span of three years, the Black Death killed one third of all the people in Europe. [1] When the Black Death had finally passed out of Western Europe in 1350, the populations of different regions had been reduced greatly. [1] He was able to adopt the epidemiology of the bubonic plague for the Black Death for the second edition in 1908, implicating rats and fleas in the process, and his interpretation was widely accepted for other ancient and medieval epidemics, such as the Justinian plague that was prevalent in the Eastern Roman Empire from 541 to 700CE. [3]

The second pandemic of bubonic plague was active in Europe from AD1347, the beginning of the Black Death, until 1750. [3] They assessed the presence of DNA/RNA with polymerase chain reaction (PCR) techniques for Y. pestis from the tooth sockets in human skeletons from mass graves in northern, central and southern Europe that were associated archaeologically with the Black Death and subsequent resurgences. [3] The Black Death brought about great change in attitude, culture, and general lifestyle in Europe. [1] "Duration assessment of urban mortality for the 14th century Black Death epidemic". [3] In 1908, Gasquet claimed that use of the name atra mors for the 14th-century epidemic first appeared in a 1631 book on Danish history by J.I. Pontanus : "Commonly and from its effects, they called it the black death" ( Vulgo & ab effectu atram mortem vocatibant ). [3] In 1984 zoologist Graham Twigg produced the first major work to challenge the bubonic plague theory directly, and his doubts about the identity of the Black Death have been taken up by a number of authors, including Samuel K. Cohn, Jr. (2002 and 2013), David Herlihy (1997), and Susan Scott and Christopher Duncan (2001). [3] The fact that accounts from the time indicate that the Black Death killed virtually all infected people raises doubt. [1] This implies that around 50 million people died in the Black Death. [3] The Black Death is estimated to have killed 30-60% of Europe's total population. [3] The data is sufficiently widespread and numerous to make it likely that the Black Death swept away around 60 per cent of Europe's population. [3] Those groups most ravaged by the Black Death had already suffered from famine earlier in the fourteenth century as storms and drought caused crop failures. [1] Besansky NJ, ed. "Distinct Clones of Yersinia pestis Caused the Black Death". [3] "A draft genome of Yersinia pestis from victims of the Black Death". [3]
In October 2010, the open-access scientific journal PLoS Pathogens published a paper by a multinational team who undertook a new investigation into the role of Yersinia pestis in the Black Death following the disputed identification by Drancourt and Raoult in 1998. [3] The main theme that one can derive from the Black Death is that mortality is ever present, and humanity is fragile, attitudes that are ever present in Western Nations. [1] By the end of 1350, the Black Death subsided, but it never really died out in England. [3] One development as a result of the Black Death was the establishment of the idea of quarantine in Dubrovnik in 1377 after continuing outbreaks. [3] One of the greatest effects of the Black Death was in the realm of laboring classes. [1] Those black swellings on victims are what give the Black Death its name. [1] Monks and priests were especially hard-hit since they cared for victims of the Black Death. [3]

The Black Death pandemic swept through Europe during the Middle Ages leading to high mortality from plague. [13] The so-called Black Death, or pandemic of the Middle Ages, began in China and made its way to Europe, causing the death of 60% of the entire population. [14]

The word plague had no special significance at this time, and only the recurrence of outbreaks during the Middle Ages gave it the name that has become the medical term. [3] "Bubonic plague was a serial visitor in European Middle Ages". [3] With all of these conditions arising from the High Middle Ages, it was only a matter of time before the population was curbed by disaster. [1]

Plague is a bacterial disease that is infamous for causing millions of deaths due to a pandemic (widespread epidemic) during the Middle Ages in Europe, peaking in the 14th century. [14] In the Middle Ages, plague was known as the " Black Death." [14] Though it took several years for real wages to rise in England in the aftermath of the Black Death (in fact, they may have actually dropped in the period immediately after the epidemic), by the late 14 th century real wages had risen sharply to their medieval peak. [15] If people were less frail (healthier) on average after the Black Death than before epidemic, a higher proportion of the post-Black Death population should have survived to older ages compared to the pre-Black Death population. [15] By targeting frail people of all ages, and killing them by the hundreds of thousands within an extremely short period of time, the Black Death might have represented a strong force of natural selection and removed the weakest individuals on a very broad scale within Europe. [15] Together, these results indicate enhanced survival and improvements in mortality after the Black Death, and by inference, improved health at least at some ages in the post-Black Death population. [15] These results indicate reduced risks of mortality across all ages after the Black Death compared to the pre-Black Death population. [15] DeWitte SN (2010) Age patterns of mortality during the Black Death in London, A.D. 1349-1350. [15] His findings suggested that the ratio of individuals above the age of 60 relative to those between ages 20� increased after the Black Death in some areas, which would indicate that survival improved following the epidemic. [15]

Even though medicine in the Middle East was marginally more advanced than European medicine, physicians in both regions were unsuccessful at treating the Plague; however, the Black Death served to promote medical innovations that laid the foundations of modern medicine. [13] While Chalmelli's assessment may have been overly sanguine, last wills and testaments and monastic and confraternal necrologies point in this same downward direction, charting a rapid adaptation between the Black Death pathogen and human hosts. All of these late medieval records pose problems for quantitative analysis. [9]

Life in the late Middle Ages could be nasty, brutish, and short, with famines, plagues, and wars emphasizing that death could be sudden and painful. [16] The Black Death, also known as the Bubonic Plague, is by far one of the most horrifying and yet the most fascinating subjects tied to the Middle Ages. [17]

Too many rats across Europe had gained resistance to Plague for the pathogen to build up the momentum necessary to launch an all-out epidemic like the Black Death. [8] A rough estimate is that 25 million people in Europe died from plague during the Black Death. [18] The Black Death terminated serfdom in Europe; serfs were virtual slaves, peasants who were "tied to the land" and obliged to farm certain areas for no other reason than that their ancestors had. The impact of Plague on society is clearly visible when one compares those places where it hit hard with those it didn't. [8] To wit, data suggest that people whose ancestors come from those areas of Europe which suffered most heavily during the Black Death coincide with populations today which exhibit lower rates of mortality from AIDS. [8] In crowded areas where black rats and their fleas were common, or in small rural hamlets where these hosts lived alongside the human population, the mortality was staggering, and archaeologists have in recent decades uncovered the remains of small villages that essentially disappeared during the period of the Black Death. [6] The common black rat, Rattus rattus, was the host to the oriental rat flea, and the primary means of plague transmission during the Black Death. [6] With our better understanding of historic plague, other diseases among animals such as bird-flu and swine-flu are carefully monitored today in case they develop into person-to-person infections resulting in high mortality as witnessed in the Black Death. [5] For all the destruction Yersinia pestis left in its wake, people at the time of the Black Death never knew this bacillus was the cause of the Plague. [8] Starting with the Black Death, its deadliest attack, plague later returned to Britain in 1361 (when it especially affected younger and elderly people), in 1374, and regularly until it disappeared shortly after the Great Plague of 1665.
[5] The plague was not called the Black Death until many years later. [4] Where the numbers of casualties can be calculated with any certainty (for instance, in urban centers like Paris), it's clear that between 1348 and 1444 the Black Death and recurrences of Plague cut the population by half, if not more. [8] In 1347, though, a new plague struck Europe and hit rich and poor alike: the Black Death. [11] The Black Death refers to the period in Europe from approximately 1347 to 1353, when bubonic plague ravaged the European population and initiated a long-term period of cultural trauma from which, one could argue, we have not yet completely recovered. [6] The Black Death was one of the most devastating pandemics in human history, peaking in Europe between 1347 and 1350 with 1/3 of the population killed. [7] The most recent estimate is by Ole J. Benedictow, who in his magisterial The Black Death 1346-53: The Complete History estimates the total population loss at 65% in both Asia and Europe. [6] It's probably safe to say that something on the order of a quarter to a third of the population of Europe died during the Black Death, amounting to as many as twenty million people. [8] Some contemporaries realised that the only remedy for plague was to run away from it - Boccaccio’s Decameron is a series of tales told among a group of young people taking refuge from the Black Death outside Florence. [5] The Black Death is the single most significant disease in Western civilization to date, a true and literal plague. [8] Historian Walter Scheidel contends that waves of plague following the initial outbreak of the Black Death had a leveling effect that changed the ratio of land to labor, reducing the value of the former while boosting that of the latter, which lowered economic inequality by making landowners and employers less well off while improving the lot of the workers.
[7] The Black Death is widely believed to have been the result of plague, caused by infection with the bacterium Yersinia pestis. [18] The Black Death, a pandemic of both bubonic and pneumonic plague that was carried on shipboard from the Levant, reached Provence in 1347, ravaged most of France in 1348, and faded out only in 1350. [18] Although there is some debate among historians as to the epidemiology of the Black Death, most historians argue that this was the Plague (in two forms, bubonic and pneumonic). [11] The epidemic returned to Europe several times, but wasn't as bad as the Black Death period. [4] The Black Death was a pandemic that ravaged Europe between 1347 and 1351, taking a proportionately greater toll of life than any other known epidemic or war up to that time. [18]

The Black Death is the name for a terrible disease that spread throughout Europe from 1347 to 1350. [4] The Black Death: natural and human disaster in Medieval Europe. [7] The Black Death was, thus, destructive not only to the physical well-being of Medieval Europe but also its general mental health, a situation which had as much to do with the timing of its onset as anything else. [8] In the 1400s the Black Death periodically revisited Europe and populations continued declining in many regions, but the most intensive demographic impact had already been felt by 1400. [11] The population in England in 1400 was perhaps half what it had been 100 years earlier; in that country alone, the Black Death certainly caused the depopulation or total disappearance of about 1,000 villages. [18] Most average estimates state that about one-third of the population died from the disease in the years spanning the Black Death. [6] Some cities (like Milan and Nuremberg) escaped the devastation of the Black Death, but in London and many other great cities as many as half of the population died of the disease. [11] Modern genetic analyses indicate that the strain of Y. pestis introduced during the Black Death is ancestral to all extant circulating Y. pestis strains known to cause disease in humans. [18] Devastating as the Black Death was to humankind in the fourteenth century, it is important to remember a central feature of this disease. [8] The disease finally played out in Scandinavia in about 1351, but another wave of the disease came in 1365 and several times after that until, for some unknown reason, the Black Death weakened and was replaced by waves of typhoid fever, typhus, or cholera.
[12] Professional self-torturers who went from town to town, the flagellants scourged themselves for a fee to bring God's favor upon a community hoping to avert the bubonic plague. According to Medieval logic, the Black Death was a punishment for sin, and its atonement must be paid in real, physical terms; flagellants served, then, as a means for people to buy that remission from sin at the price of migrant "whipping boys." [8] For reasons that are still debated, population levels declined after the Black Death's first outbreak until around 1420 and did not begin to rise again until 1470, so the initial Black Death event on its own does not entirely provide a satisfactory explanation to this extended period of decline in prosperity. [7] The Black Death hit the culture of towns and cities disproportionately hard, although rural areas (where most of the population lived) were also significantly affected. [7] Although bubonic plague is still endemic in many areas, including New Mexico in the American Southwest, it does not spread as did the Black Death of 1347-1351. [12] Anti-Semitism greatly intensified throughout Europe as Jews were blamed for the spread of the Black Death. [18] It lingered in parts of Western Europe and was introduced to Eastern Europe after the Black Death. [7] The second plague pandemic was the dreaded Black Death of Europe in the 14th century. [18] Sparsely populated Eastern Europe was less affected by the Black Death and so peasant revolts were less common in the fourteenth and fifteenth centuries, not occurring in the east until the sixteenth through nineteenth centuries. [7] The manorial system was already in trouble, but the Black Death assured its demise throughout much of western and central Europe by 1500. [7] Much of the infrastructure of Europe was gone when the Black Death finally subsided.
[4] By any measure taken, the Black Death was world-shattering and shows how even the smallest of things, the microbial world, can at times steer the course of human civilization. [8] Although the Black Death highlighted the shortcomings of medical science in the medieval era, it also led to positive changes in the field of medicine. [7] Famine and disease, and especially the Black Death, hit the towns as hard if not harder than they did the villages. [11] A theory put forth by Stephen O'Brien says the Black Death is likely responsible, through natural selection, for the high frequency of the CCR5-Δ32 genetic defect in people of European descent. [7] Because fourteenth century healers were at a loss to explain the cause of the Black Death, Europeans turned to astrological forces, earthquakes, and the supposed poisoning of wells by Jews as possible reasons for the plague's emergence. [7] Consequences of the Black Death included a series of religious, social, and economic upheavals, which had profound effects on the course of European history. [7] The Great Mortality: an intimate history of the Black Death. [6] Norwegian historian Ole J. Benedictow ('The Black Death: The Greatest Catastrophe Ever', History Today, Volume 55, Issue 3, March 2005; cf. Benedictow, The Black Death 1346-1353: The Complete History, Boydell Press (7 Dec. 2012), pp. 380ff.) suggests a death rate as high as 60%, or 50 million out of 80 million inhabitants. [7] The psychological effects of the Black Death were reflected north of the Alps (not in Italy) by a preoccupation with death and the afterlife evinced in poetry, sculpture, and painting; the Roman Catholic Church lost some of its monopoly over the salvation of souls as people turned to mysticism and sometimes to excesses. [18] Many people thought that the Black Death was punishment from God.
[4] When the rat and the flea brought the Black Death, Jews, with better hygiene, suffered less severely. [7] Although the outbreak of the Black Death in 1348 dominated the economy of the 14th century, a number of adversities had already occurred in the preceding decades. [18] Nor does it help that prior to the Black Death many local governments had collapsed in the wake of the Great Famine of 1315-17 and the outbreak of the Hundred Years' War (1337-1453). [8] In 1337, on the eve of the first wave of the Black Death, England and France went to war in what would become known as the Hundred Years' War. [7] The Black Death reached the extreme north of England, Scotland, Scandinavia, and the Baltic countries in 1350. [18] King Death: The Black Death and its aftermath in late-medieval England. [6] The motif also survives in numerous manuscript illuminations, and, most interestingly, in over fifty wall-paintings from after the period of the Black Death. [6] The Black Death precipitated some change for the good, at least among those of the working class who survived its onslaught. [8] Wages of labourers were high, but the rise in nominal wages following the Black Death was swamped by post-Plague inflation, so that real wages fell. [7] There can be little doubt that the Black Death began before the first historical accounts record its presence, but where or how is unclear. [8] The enforcement of the statutes of labourers during the first decade after the black death, 1349-1359 (1908). [7] The Black Death also inspired European architecture to move in two different directions: (1) a revival of Greco-Roman styles that, in stone and paint, expressed Petrarch's love of antiquity, and (2) a further elaboration of the Gothic style. [7]

The general hygiene of Europeans improved after the Middle Ages, but while people may, in fact, have started bathing more after the fourteenth century, rats and fleas, which are central in spreading Plague, did not adopt better standards of health. [8] The more well-connected and vital Europe of the years following the High Middle Ages proved a much better host for this plague. [8] As a rule, efforts to limit Plague in the Middle Ages served mainly to disperse it more widely, since Medieval quarantines involved sequestering the infected in a building. [8] Unfortunately, the people in the Middle Ages didn't know that the disease was carried by rats. [4] When people got the disease in the Middle Ages, they almost always died. [4] Coming off the peak of the High Middle Ages, people had already been rattled by the disintegration of the Church, the Famine of 1315-1317 and the outbreak of the Hundred Years' War. [8] Although agricultural productivity had increased in the High Middle Ages, population growth had exceeded the limits of the agricultural economy by 1300. [11] Whereas in the High Middle Ages a warm, dry climate had predominated, by the turn of the fourteenth century global weather patterns changed for the colder and wetter. [8] There was no cure for bubonic plague in the Middle Ages, none indeed until the discovery of antibiotics in the modern age. [8] This made larger cities and towns, which were very dirty during the Middle Ages, especially dangerous as there were lots of rats there. [4] Painters like Giotto (1267-1337), who worked in Florence, sought to imitate nature in their painting and strove for naturalistic effects that contrasted strongly with the more abstract and stylized art of the early Middle Ages. [11]

Although scholastic writers in the Late Middle Ages typically wrote in Latin, authors such as Dante and Chaucer wrote in Italian and English respectively, allowing a wider audience to enjoy their work. [16]

Observed decreases in mortality levels during medieval plague epidemics after the Black Death might reflect molecular changes in the pathogen responsible for the epidemic, and subsequent plague outbreaks, that rendered it less virulent rather than reflecting changes in health and susceptibility within the human host population. [15] The results of this study are particularly striking given that the Black Death was just the first outbreak of medieval plague, and the period after the epidemic was characterized by repeated crisis mortality resulting in particular from repeated outbreaks of plague. [15] The results indicate that there are significant differences in survival and mortality risk, but not birth rates, between the two time periods, which suggest improvements in health following the Black Death, despite repeated outbreaks of plague in the centuries after the Black Death. [15]

Perhaps people who survived the Black Death and their descendants were generally less frail and less likely to die from a variety of causes (including plague) compared to the pre-epidemic population because of heightened immune responses or reduced disease susceptibility, i.e. traits that were selectively favored during the epidemic. [15] These subsequent outbreaks of medieval plague might have prevented population recovery following the Black Death. [15] Recent molecular analyses of bone and tooth samples of people who died during the Black Death have yielded DNA from the causative pathogen of the medieval epidemic, Yersinia pestis (which continues to affect human populations today by causing bubonic plague). [15] They have compared these results to the overland transmission speeds of the twentieth-century bubonic plague and have found that the Black Death travelled at 1.5 to 6 kilometres per day, much faster than any spread of Yersinia pestis in the twentieth century. The area of Europe covered over time by the Black Death in the five years 1347 to 1351 was even more impressive. [9]

He then went further, maintaining that the reverse transmission of plague from humans to rats, other mammals, or other humans was highly unlikely: the concentration of the bacillus in humans is far too low to transmit the plague effectively to other animals or humans. But when Hirst came to explain the Black Death, for some reason he ignored his earlier conclusions and speculated that its person-to-person spread might be explained by the human flea. [9] In addition to failing to explain the rapid and devastating spread of the Black Death or other plagues such as those of 1629-33 through isolated rural districts, such an urban transmission has yet to happen even on a minuscule scale in urban areas since 1894, even in cities such as Dakar and St Louis (Senegal) where both plague and human fleas have been plentiful. [9] After three years of work, Gérard Chouin is adamant that the medieval-era bubonic plague epidemic, the Black Death, spread to Sub-Saharan Africa and killed many people there as it did in Europe and the Mediterranean basin in the 14th century. [13] By the following August, the plague had spread as far north as England, where people called it "The Black Death" because of the black spots it produced on the skin. [19] These characteristics of plague transmitted by Pulex irritans do not, moreover, tally well with the epidemiology of the Black Death and its subsequent strikes through the early modern period or with the habits and conditions of its people. [9] The authors admit that the descriptions of chroniclers do not match the long incubation period of 32 days and a 37-day infectious period argued by Susan Scott and Christopher Duncan, Biology of plagues: evidence from historical populations, Cambridge University Press, 2001, pp. 24, 128-9, and Return of the Black Death, op. cit., note 22 above, pp. 155 ff.
[9] Given that catastrophic plague outbreaks were characteristic of the post-Black Death period, but not of the pre-Black Death period considered here, one might reasonably assume that health and survival declined following the Black Death. [15] Perhaps most vexing for those who wish to label the Black Death as Yersinia pestis has been the drastic difference between the transmission of the two diseases: one travelling with astonishing speed and efficiency, the other discovered early on by plague commissioners to have been hardly contagious at all, especially in bubonic form. [9] According to chroniclers, places such as Trapani on the west coast of Sicily became totally abandoned after 1348. Further, although later strikes of plague in the seventeenth century were not as widespread as waves of the Black Death in the latter half of the fourteenth century, they could be as devastating for cities such as Genoa and Naples in 1656-7, which had not experienced plague for 120 years. [9] By contrast, the plague cycles of the Black Death and its subsequent strikes over the first hundred years of its history were radically different. [9] The Black Death was one of the most devastating pandemics in human history, resulting in the deaths of an estimated 75-200 million people and peaking in Europe in the years 1348-1350. [10] The medieval Black Death (c. 1347-1351) was one of the most devastating epidemics in human history. [15] Given that the mortality associated with the Black Death was extraordinarily high and selective, the medieval epidemic might have powerfully shaped patterns of health and demography in the surviving population, producing a post-Black Death population that differed in many significant ways, at least over the short term, from the population that existed just before the epidemic.
[15] This study examines whether the selective mortality of the Black Death, combined with consequent rising standards of living after the epidemic, resulted in a healthier post-epidemic population in London compared to the pre-Black Death population. [15] These records, however, pertain to a specific group of men, who were cloistered, and as the Black Death was a contagious disease with a high household clustering of cases, the documentation may not provide an equivalent mortality among the general population. [9] Given that reproductive-aged individuals with relatively high frailty (i.e. an individual's risk of death relative to other members of the population) were more likely to die during the Black Death than their age-peers with lower frailty, the epidemic might have affected genetic variation with respect to disease susceptibility or immune competence and thus acted to reduce average levels of frailty in the surviving population. [15]

He then went further: “We can be sure that the two greatest European pestilences, the plague of Justinian's reign (A.D. 542) and the Black Death of 1348, were both the result of the spread of the plague bacillus.” [9] Yersinia pestis, the bacteria that caused Justinian's Plague and the Black Death, was once only able to cause a mild gastrointestinal infection. [13] This and other essays in this collection fail to support Robert Sallares's assertion (‘Ecology, evolution, and epidemiology of plague’, in ibid., pp. 231 ff., p. 258) that the distribution of the Justinianic plague (as well as that of the Black Death) was “patchy”, and thus resembled Yersinia pestis simply because some towns and regions were spared during particular plague waves. [9] Some of these plagues may have been Yersinia pestis of the rat–rat-flea variety; others appear to have been more of the contagious and inter-human Black Death sort. [9] If these scientists had compared the time–space propagation of the Black Death with other twentieth-century regions of plague, even subtropical ones such as China, the differences in orders of magnitude would be greater still. [9] Cohn SK Jr (2008) Epidemiology of the Black Death and successive waves of plague. [15] Restriction to a single geographic location means that differences between the pre- and post-Black Death samples can be attributed to the effects of the Black Death and changes in standards of living following the epidemic (note, however, that London was a tremendous draw for migrants in this period; the possible effects of migration on the results of this study are addressed in the Discussion). [15] The results of this study also raise questions about the possible effects of migration, i.e. whether the pre- and post-Black Death patterns are artifacts of migration into London after the epidemic rather than, or in addition to, reflecting the effects of the Black Death itself. [15]

A positive or negative estimate for the parameter representing the effect of the time period covariate on the hazard would suggest that those who died in the period following the Black Death faced increased or decreased risk of death, respectively, compared to those who died before the epidemic. [15] The possibility of increased immigration to London following the Black Death does not explain the differences in age-at-death distributions observed between the two time periods, the most striking of which is the higher proportion of older adults in the post-Black Death sample. [15] The samples used here are also derived from a period before the Black Death that better represents normal pre-epidemic mortality patterns, for comparison with the post-Black Death data, than the early 14th-century data used by Nightingale. [15]

The arrival of the Black Death in England, which killed around half of the national population, marks the beginning of one of the most fascinating, controversial and important periods of English social and economic history. [20] London drew substantial numbers of migrants from throughout England and beyond throughout the medieval period, both before and after the Black Death. [15] This prejudice was nothing new in Europe at the time, but intensified during the Black Death and led many Jews to flee east to Poland and Russia, where they remained in large numbers until the 20th century. [21] Spreading throughout the Mediterranean and Europe, the Black Death is estimated to have killed 30-60% of Europe's total population. [10] Cohn SK (2002) The Black Death transformed: disease and culture in early Renaissance Europe. [15] The Black Death, as it is commonly called, especially ravaged Europe, which was halfway through a century already marked by war, famine and scandal in the church, which had moved its headquarters from Rome to Avignon, France, to escape infighting among the cardinals. [21] The manorial system was already in trouble, but the Black Death assured its demise throughout much of Western and Central Europe by 1500. [10] In this article an array of dispersed sources for the Southern Netherlands together with a new mortmain accounts database for Hainaut show that the Black Death was severe, perhaps no less severe than in other parts of western Europe. [13] These changes in standards of living resulted in large part from the massive depopulation caused by the Black Death, which reversed the pre-epidemic conditions of an excess population relative to resources.
[15] These data exclude many members of the population and they begin very soon before the Black Death during a period of severe famine, including the Great Famine of 1315-17 and the resulting Great Bovine Pestilence. [15] The Black Death ravaged the continent for three years before it continued on into Russia, killing one-third to one-half of the entire population in ghastly fashion. [21] Improvements in diet after the Black Death, and particularly decreases in social inequities in diet that presumably benefitted the majority of the lower status population of England, might have acted to reduce average levels of frailty in the population, perhaps more than any other factor associated with improvements in standards of living. [15] The book's chapters offer original reassessments of key topics such as the impact of the Black Death on population and its effects on agricultural productivity and estate management. [20] This lack of total (or even partial) correspondence in the geographic origin of the skeletal and documentary datasets raises the possibility that the differences observed by Russell reflect population differences rather than the effects of the Black Death. [15] DeWitte S, Hughes-Morey G (2012) Stature and frailty during the Black Death: the effect of stature on risks of epidemic mortality in London, A.D. 1348-1350. [15] Previous studies using historical data have examined the demographic consequences of the Black Death but the results have been mixed, with some finding tentative evidence of improvements in survival and mortality and others finding that survival declined in the centuries following the epidemic. [15] The existence of migration into London following the Black Death does not necessarily undermine the conclusions made here about positive changes in mortality and, by inference, health following and resulting from the Black Death, though it is certainly an important issue worthy of further study.
[15] The evidence from this study that survival and mortality were affected in positive ways by the Black Death raises the question of what was the proximate cause of these changes. [15] The results of this study indicate that mortality and survivorship improved in the generations following the Black Death, and that the patterns observed are not simply an artifact of temporal changes in fertility. [15] This study investigates whether the combination of the selective mortality of the Black Death and post-epidemic improvements in standards of living had detectable effects on survival and mortality in London. [15] The study shows that people before the Black Death experienced gradually worsening living conditions after 1200. [20] Following the Black Death, the amount of money spent per capita on food increased, and people ate higher quantities of relatively high-quality wheat bread, meat, and fish, much of which was consumed fresh rather than salted as had been common prior to the epidemic. [15] Analysis of nitrogen isotope values, for example, might reveal whether people in general consumed substantially more animal protein following the Black Death than was true before the epidemic. [15] The Black Death was one of the most devastating epidemics in human history. [15] At least for the Black Death, such an assertion flies in the face of this disease's speed and distribution, as shown by George Christakos, Ricardo A Olea, Marc L Serre, Hwa-Lung Yu, and Lin-Lin Wang, Interdisciplinary public health reasoning and epidemic modelling: the case of Black Death, Berlin, Springer, 2005.

After the Black Death, there was a severe shortage of laborers, effectively ending the medieval system of serfdom, and consequently wages improved dramatically while prices for food, goods, and housing fell. [15] Raoult D, Aboudharam G, Crubezy E, Larrouy G, Ludes B, et al. (2000) Molecular identification by "suicide PCR" of Yersinia pestis as the agent of medieval black death. [15] Bos K, Schuenemann V, Golding G, Burbano H, Waglechner N, et al. (2011) A draft genome of Yersinia pestis from victims of the Black Death. [15] Schuenemann VJ, Bos K, Dewitte S, Schmedes S, Jamieson J, et al. (2011) Targeted enrichment of ancient pathogens yielding the pPCP1 plasmid of Yersinia pestis from victims of the Black Death. [15]

A mass burial of 48 bodies, known to be victims of the Black Death, has been discovered at the site of a 14th-century monastery hospital at Thornton Abbey in England. [13] The social and governmental response to the Black Death in England undermined the social strength of women’s property rights and created a late-medieval patriarchal structure qualitatively different from that of the earlier fourteenth century. [13] According to Dyer, migration likely increased after the Black Death as an expression of resistance against restrictions enacted under labor laws in England, such as attempts to prevent increases in wages after 1349. [15] Munro JH (2004) Before and after the Black Death: money, prices, and wages in fourteenth-century England. [15] Given that the number of workers was not only smaller than had existed before the Black Death, but that they had new opportunities for mobility and alternative employment if they found existing conditions unsatisfactory, employers increased not only wages but also payments in kind, such as extra food and clothing, to attract workers. [15] Taking his themes from the Four Horsemen of the Apocalypse, John Aberth describes how the lives of ordinary people were transformed by a series of crises, including the Great Famine, the Black Death and the Hundred Years War. [20] Though previous modeling work has shown that demographic perturbations, such as the Black Death, can have effects on age-at-death distributions that last for several decades, substantial effects are relatively short-lived (i.e. up to 50 years). [15] A study earlier this year found that despite its reputation for indiscriminate destruction, the Black Death targeted the weak, taking a greater toll among those whose immune systems were already compromised. [21] This study invites us once more to take a cautious attitude towards the estimates of the impact of the Black Death later in the 14th century.
[15] In addition to its potential as a selective agent operating upon intrinsic biological factors, the Black Death might also have shaped population patterns by severely altering exogenous factors that affected health and demography. [15] The Black Death resulted in the deaths of an estimated 75-200 million people, approximately 30% of Europe's population. [10] Tree-ring growth shows that the significant population decline in Norway began decades before the Black Death. [20] The age-at-death distributions from the pre- and post-Black Death samples suggest that survival improved following the Black Death, as the post-Black Death sample has a higher proportion of older adults. [15]
