Can a diagnosis be a falsifiable H/T?

Can a diagnosis, as in a medical diagnosis, be a falsifiable hypothesis/theory (H/T)? Simply stating that a disease of a certain name exists, without any explanation of causes and symptoms, makes no falsifiable predictions and is therefore not scientific. When explanations of causes and symptoms are present, scientific rigor is needed. One part of that rigor is consistency. For example, the H/T that cells dividing in an uncontrolled way spread and cause potentially lethal damage (what is known as cancer) is coherent and falsifiable (and well supported, though that is beside the point for the demarcation of scientific H/Ts from pseudoscience).

However, if it is, for example, claimed that webbed feet cause loss of vision in one's left eye, that is not a coherent line of causation. Even if it is formulated as a "H/T" with the "falsifiable" prediction of a statistical correlation between webbed feet and blindness in one's left eye, it is not a H/T about an actual cause. Statistical correlations do not prove common biological causes, any more than any other observation can be proof of any other specific H/T. Even if there were statistical support for a webbed feet/blind left eye correlation, it would not prove a common cause of, say, genetic or hormonal origin. Asking "how else could the correlation be explained?" is not valid proof of a claim. It is possible to mention other "explanations" merely as counter-examples, e.g. "there may be a tradition among doctors not to treat eye damage on the left eye of people with webbed feet". However, such explanations should be explicitly marked as mere counter-examples (unless they are made into consistent and falsifiable H/Ts) and only used in the sense of "this is one possible example, there may be countless more". In that case, it is an example used to explain falsificationism as opposed to verificationism, not a claim that the example's content must be true.
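
To illustrate, here is a minimal simulation sketch (Python, with all probabilities invented for the example) in which a hidden common factor manufactures a webbed feet/blind left eye correlation even though neither condition causes the other:

 import random

 random.seed(1)
 n = 100_000
 w_total = wb = notw_total = notw_b = 0
 for _ in range(n):
     c = random.random() < 0.10                   # hidden common factor (invented)
     w = random.random() < (0.80 if c else 0.05)  # webbed feet, raised by c
     b = random.random() < (0.60 if c else 0.02)  # left-eye blindness, raised by c
     if w:
         w_total += 1
         wb += b
     else:
         notw_total += 1
         notw_b += b

 print("P(blind | webbed feet)    =", round(wb / w_total, 3))
 print("P(blind | no webbed feet) =", round(notw_b / notw_total, 3))
 # The two probabilities differ sharply, yet neither trait causes the other;
 # the entire correlation is produced by the hidden factor c.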

Is it possible to formulate a falsifiable H/T saying "statistically correlated symptom group A has a common cause X"? In principle, yes, but consistency and rigor are needed. For example, the H/T "psychopathic murderers commit mostly instrumental murder AND most of their victims are women, AND there is an underlying common cause" predicts that there should be significantly more economic and other instrumental motives for murdering women than for murdering men. The observation that there are at least as many "egoistic" motives in society for murdering men as there are for murdering women thus falsifies the H/T of an underlying common cause behind the grouping of "symptoms" that is commonly called "psychopathy". However, if the claim of such a common cause is immunized against criticism by saying "if you say that psychopathy doesn't exist, you are a psychopath", the claim loses its falsifiability and therefore the most fundamental condition for being a scientific hypothesis.

Issues with psychiatry

Psychiatry is not fundamentally different from psychology

When criticism is directed at assumptions that psychology and psychiatry (and some parts of neurology, especially those that assume brains to be massively modular) share, it is sometimes claimed that they have nothing in common because they are different "fields". In other words, it is claimed that there are no doctrinal similarities merely because those who practice them have different titles and there are differences in what they are legally allowed to do. That is like saying that humanist criticism of Christianity expressed in a debate against a priest cannot be applied against Christianity as expressed by a bishop. If the bishops said so, how seriously would you take their claim? Discipline boundaries that stand in the way of empirical evidence being used to falsify hypotheses and theories are exactly what allow bunk to be published in the most respected journals. Imagine, as an analogous example, that someone claimed that car mechanics was exclusively about cars that function as they should, that knowledge about defects in cars was a completely separate field called car mechaniatry, and that the two had nothing to do with each other. How seriously would you take that claim?

If it is said that healthy brains work in a particular way, that claim does make predictions about what effects different defects in the brain can have. If, for example, a psychological theory says that language evolved before math and that the brain's language functions are necessary but not sufficient conditions for the mathematical functions, that theory is incompatible with any psychiatric theory that says either that all people with mathematics deficits are language impaired (which would make language sufficient for math) or that there are people with language deficits but no mathematics impairments (which would contradict language being necessary for math).
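
The incompatibility can be made machine-checkable. A minimal sketch (Python; the predicate name is my own label, not established terminology):

 # "Language is necessary but not sufficient for math" means:
 #   necessary:      has_math -> has_language
 #   not sufficient: (has_language and not has_math) must be possible
 def allowed_by_theory(has_language, has_math):
     return has_language or not has_math   # forbids math without language

 # Claim 1: "there are people with language deficits but no mathematics
 # impairment" asserts that the combination below exists:
 print(allowed_by_theory(False, True))   # False: contradicts necessity

 # Claim 2: "all people with math deficits are language impaired" forbids
 # the very combination that "not sufficient" requires to be possible:
 print(allowed_by_theory(True, False))   # True: the theory demands this case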

The unscientific practice of pathologizing consistent application of hypotheses

One unscientific thing about psychiatry is that it makes claims and then pathologizes people for drawing the conclusions that follow logically from those claims. For example, even though psychiatry says that testosterone increases the risk of violent crimes, it also deemed a female road accident victim "psychotic" for asking the ambulance personnel to give her female donor blood out of fear of catching psychopathic traits from the testosterone in male blood. What she had done was to take the premise created by psychiatry and follow it to its logical consequences. Scientifically, no matter whether a premise is true or false, it has logical consequences; it is just that at least some of the logical consequences of false premises are factually wrong. If there are false conclusions that follow from a premise, then the premise itself is false. If it is insane to draw the conclusions that follow from a premise, then the entire scientific method is insane. And without the scientific method, nothing can be scientific. And if nothing is scientific, psychiatry is not scientific.

An often regurgitated claim is that "people are too complex to apply the same scientific method as in physics". However, without scientific rigor, the claims that people "are that way" have no way of extricating themselves from being simple self-fulfilling prophecies at the level of tribal magic. And psychiatry shows again and again that it lacks scientific rigor. Consider just the fact that it decides what to consider pathological through votes! Science is not done by voting!

Overblowing the significance of a sense of self

Rigor considerations also apply when a H/T states a specific biological mechanism as a cause. For example, a "H/T" saying that "a disruption in the brain's gamma-synchronous timekeeping that makes the patient feel like he or she performs actions before deciding to do them causes schizophrenia, including voice hearing" contains no explanation as to why such a perception of time and decision making would cause voice hearing. Therefore, that interpretation of gamma synchronicity and "schizophrenia" is not a scientific H/T. "Changing" the H/T to "schizophrenic perceptions of time and decision making are unrelated to voice hearing" actually creates a new H/T.

The claim that the ability to have one's false beliefs corrected by rational arguments requires a specific sense of self is flawed. It is possible to criticize a false belief that others believe in without believing in it oneself. Therefore, you do not have to think of yourself as a separate entity to be able to criticize your own false beliefs, thus ceasing to believe in them and moving closer to knowledge of the truth. The claim that anyone without a specific sense of self must be "psychotic", "delusional" or otherwise incapable (or even just less capable on a gradual scale; a black-and-white distinction is not necessary to make this point) of rational discussion is nonsense.

Said claims about the "importance" of a specific sense of self are especially absurd when they come from people who also say that "normal" brains are "wired" not to see the flaws in one's own beliefs. If that were the case, a brain with no sense of whose belief it is criticizing (its own or someone else's) would have a superior, not an inferior, ability to listen to and be corrected by rational arguments. It would be the "normal" brains that were the delusional, "psychotic" ones! It also would not be necessary to lack any sense of self to impartially criticize any claims. Simply having no sense of idea ownership would suffice.

Why are we aware that we cannot see radio waves? Anton-Babinski?

It is often claimed, in the context of "Anton-Babinski syndrome", that specific processing of sensory data in the brain is required to be aware of one's lack of sensory input in each regard in which it is missing. But why are we not all ignorant, "Anton-Babinski blindness" style, of our inability to see radio waves? How can we know that we do not see radio waves? After all, we did not evolve a "brain mechanism" for being aware of our lack of a radio sense!

Parrots can say "something is wrong with me", so what is psychosis?

Psychiatry claims that it is the lack of "insight" that something is "wrong" that distinguishes a psychosis. However, a parrot can say "there's something wrong with me". Saying that phrase has nothing to do with actually being able to know what is wrong (if anything is indeed wrong) and, if so, what to do about it.

Apart from the problem of knowing whether or not something is actually wrong (which requires criticism of all assumptions, since simply having a list of "right" and "wrong" does not protect the list itself from being equally wrong or more wrong than what it considers "wrong"), there is the fact that even if one could be sure that something is wrong, that would not help in finding out what is wrong. Since knowing that something was wrong without being able to correct it would generate no evolutionary advantage over not knowing that something is wrong, there is no way a "know that something is wrong" mechanism could have evolved before the ability to specifically know what went wrong and what can be done about it. Therefore such a "mechanism" cannot be a necessary condition for precise knowledge about what is wrong either. This means that brains do not work in a way that can be disrupted into being "psychotic" by knocking out an ability to know that "something" is wrong. And therefore, psychiatry's definition of "psychosis" is bogus.

The ability to abandon a false belief when there is concrete evidence against it has nothing to do with believing that the false belief is due to a biological flaw in one's brain that must be drugged away. Kepler and Galileo were raised in a geocentric tradition, and they became heliocentrists when evidence turned up, without believing that the former geocentrism was due to "mental illness" and without taking drugs against geocentrism.

Psychiatry's claim that so-called "extreme" beliefs picked up from others "culturally" are somehow different from those created by the individual is completely arbitrary and lacks any foundation in science. To say that "we now know what people in such-and-such a culture did not know" is unscientific collectivism. Not taking what an authority says for granted has nothing to do with an inability to learn from empirical evidence. If there were such a trade-off, science would not exist. This point does not require a black-and-white distinction between "sane" and "psychotic"; a gradual trade-off would also preclude the combination of disbelief in authority and trust in empirical evidence that is needed for science to exist at all. If you only tell a so-called "patient" who thinks the Earth is flat that the Earth is round, and assume in a false dilemma that the person must either take your word for it or have a fixed delusion, you cannot know whether the person would abandon the flat Earth model after flying on a high-altitude aeroplane and seeing the curvature of the Earth with his or her own eyes.

Psychiatry's lack of science is also shown by cases in which people say that they are psychotic in one sentence, and that they do not think they are psychotic because they are psychotic in the next. At face value that may sound like the "patients" themselves are rambling inconsistently, but on closer inspection it turns out that the psychiatrists believe that the "patients" consider themselves sane even when they say that they consider themselves insane! That marks the psychiatrists as the originators of the contradiction, while the "patients", who are under the coercion of the psychiatrists, may simply be repeating the nonsense the psychiatrists say.

Cerebral Rubicon versus mirror tests

Cerebral Rubicon refers to the point at which a brain is both big enough and connected enough to produce speech. It is not the same thing as being sophont, but it is another threshold. Since computer simulations show that the baboon vocal tract is anatomically capable of speech, and since it is already known that two-year-olds have larynges no more descended than those of chimpanzees and yet they speak, the vocal anatomy model fails. It is important to note that not only brain size but also brain connectivity matters. Late infants to early toddlers tend to start speaking at a brain size of about two thirds of adult size, the equivalent of Homo erectus. However, since the connections between the brain's hemispheres in genetically unmodified humans first start functioning properly during the teens, the connectivity penalty at the stage at which speech begins means that a smaller brain could probably speak given better connectivity. It is noteworthy that while loss of speech related to Alzheimer's disease (which starts impairing connectivity before it kills neurons and shrinks the brain) also takes place at about two thirds of healthy brain size, or Homo erectus size, loss of speech related to vascular dementia (in which death of neurons, which shrinks the brain, is the root cause of the loss of brain function) tends to happen slightly below half of healthy brain size, or Homo habilis size.
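
As a rough arithmetic check of these thresholds (Python; the brain volumes are assumed textbook ballpark figures, not values from this article):

 adult_cc   = 1350   # assumed average modern adult brain volume
 erectus_cc = 900    # assumed Homo erectus midpoint
 habilis_cc = 640    # assumed Homo habilis midpoint

 print(adult_cc * 2 / 3)   # 900.0: the "two thirds" speech threshold ~ erectus
 print(adult_cc / 2)       # 675.0: "slightly below half" lands near habilis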

Even though chimpanzees are not at the cerebral Rubicon, as shown by their lack of speech along with brain sizes well below the demands of even the most generous connectivities, they (or some of them, at least) still pass mirror tests. They treat their reflection as a representation of themselves and not as another chimpanzee. In the infant-to-toddler transition, however, things are very different. The cerebral Rubicon comes, on average, at a year and a half. However, mirror test ability first appears at age two to three. This discrepancy does not support the claim that self-awareness is tied to brain development. It can, however, be explained by the notion that concepts of self are cultural whims. The latter idea, that the concept of self is a cultural whim, combined with the fact that there are animal cultures, can explain why different studies often disagree on whether or not animals of the same species are self-aware.

This does not mean that the model described in "The Origin of Consciousness in the Breakdown of the Bicameral Mind" is correct. As shown in "Issues with punishment" below, being considered incapable of "helping" one's actions is an advantage in societies that selectively punish deliberate choices, so the hypothesis that moral and legal systems closer to "modern" standards would make voice hearing and lack of a defined sense of self less common is actually very unlikely to be true. It is far more likely to have the opposite effect! It does, however, debunk the biologistic model of the sense of self and the illusion of free will, and leaves the field open for other hypotheses and theories to emerge.

Given the possibility that there may have been societies that considered voice hearers criminally responsible and people who did not hear voices exempt from punishment, it is still possible that bicameralism may develop into a falsifiable hypothesis. In that case, it would predict that the transition from bicameral mind to conscious mind should coincide with records of voice hearing being classified as a basis for criminal responsibility and not for impunity.

Mobility of small brains

Psychology, psychiatry and modularistic bastardizations of "neurology" often claim that an illusion of self and of "ownership" of body parts is important for motor function. That claim, however, ignores the fact that many animals with tiny brains maintain motor function. As for the claim that human locomotion requires a particularly complex brain, it is debunked by the fact that early hominins such as Australopithecus sediba, and later small-brained hominins like Homo naledi, walked in a human-like way and had dexterous hands despite small, chimp-sized brains.

The claim that there is a mechanism in the brain that disrupts movement in the absence of an illusion of self, and that such an illusion is needed to mitigate it, is evolutionarily nonsensical. Given that there were, at one point, animal ancestors whose motor skills functioned without a sense of self, what would happen if a disruptive mechanism of psychology's description appeared? That mechanism would be selected against and eliminated by natural selection. It would not take over the population and start a selection for a "mitigating" illusion of self!

Historical variability of senses of self

There is historical evidence that the sense of self that psychology and psychiatry claim is "universal" among "sane" humans has not always been there in all cultures, not even in all civilized cultures. For example, the ancient Greeks did not believe that they made their decisions and came up with their ideas themselves; they thought that they were inspired and controlled by gods or fate. There were some ancient Greek atheists (debunking the claim that atheism is a purely modern phenomenon) and they were atomists of the cause-and-effect type, not believers in undetermined free will (and, for the record, the atomists did not believe in fate either; they believed in causal determinism, which is something completely different from fatalism). And the ancient Egyptians believed themselves to have at least three spirits: Ba, Ka and Akh. No sense of being a unified person there! While some Native Americans believed that homosexuals had two spirits, ancient Egyptians believed that everyone had at least three!

While only recorded history can be directly documented in terms of sense of self, these differences do open the possibility of other differences in prehistory. One possibility is that early Homo sapiens did not think of themselves as owners of ideas. They may have thought of ideas as something to be equally scrutinized no matter who presented them, and since they had no sense of owning ideas, they could not feel offended by criticism of their "own" beliefs. True pre-intellectual-property minds! Such a context would give Homo sapiens, with their biologically modern brain capacity, an adaptive upper hand over archaic hominins, instead of consuming nutrients in unconstructive justification of false beliefs as the kind of mind psychology believes in would have done. The model is supported by the fact that early humans survived dramatic climate change that required dramatic changes in lifestyle, which would have been impossible if they had used their brains to justify their old "knowledge" and refused to change when the environment changed.

There are also both past and present languages without gender pronouns. This debunks the claim of a universal human hardwiring to use gender pronouns.

The speed of evolution

Adaptive evolution

There are many examples of fast natural selection dramatically changing the incidence of characteristics in a small number of generations. Experiments with fox domestication in Russia have produced dramatic changes from the 1950s to the present, which in terms of generations is the equivalent of a few centuries to a few millennia for humans (the uncertainty reflects the question of how relevant the difference in litter size is). It is certainly within a timeframe shorter than the time over which human populations have diverged. There are also human examples, such as lactose tolerance, skin color, subcutaneous fat and the adaptation of hemoglobin to high altitude.
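
A rough conversion sketch (Python; the generation times are assumptions for illustration, and litter-size corrections are ignored):

 fox_years = 65                     # roughly late 1950s to the present
 fox_generation_years = 2           # assumed fox generation time
 fox_generations = fox_years / fox_generation_years

 for human_generation_years in (20, 25, 30):   # assumed human range
     print(fox_generations * human_generation_years, "human-equivalent years")
 # ~650 to ~975 years with these inputs; adjusting for litter size and
 # selection intensity stretches the estimate toward a few millennia.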

It is true that major evolutionary change, such as the evolution of completely new species, usually takes much longer, though hybridization is known to sometimes create new species within rapid timeframes like those mentioned above. That, however, does not prove that it is natural selection that is slow. It is just that most mutations are either neutral or deleterious, that much of the genome must be junk in order for many of the mutations to be neutral instead of too many of them being harmful, and that the chances at any given time of beneficial mutations appearing are very low. That means that, on average, it takes a long time from one positively selected mutation to the next. In fact, genetic evidence shows that only about a hundred genetic changes that happened on the human line after the split from the chimpanzee line have been selected for in the ancestry of all modern humans (these numbers exclude local adaptations that separate different modern human populations, and they also exclude all neutral change such as genetic drift). Since early hominins evolved upright walking while their brains were still small and apelike, a significant portion of the hundred mutations is swallowed by adaptations to upright gait that have nothing to do with the brain.
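
A back-of-envelope sketch of the implied pace (Python; the split time and generation length are assumed round figures):

 selected_changes = 100           # from the genetic evidence cited above
 split_years = 6_000_000          # assumed chimp-human split time
 generation_years = 20            # assumed average generation length

 generations = split_years / generation_years
 print(generations / selected_changes)   # ~3000 generations per selected change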

That leaves the evolution of the human brain with a measly 30 to 70 genetic changes that are shared by all modern human populations regardless of ethnicity but not by chimpanzees and bonobos, and even fewer if the differences in reproductive biology are due to mutations on the human line and not on the chimp line. Possible explanations of the large apparent intelligence difference between apes and humans include large effects of each mutation, or a critical threshold of brain capacity at which higher intellectual functions appear as an emergent phenomenon. There goes the immense genetic complexity of human brains that is claimed by psychiatry as some kind of buffer that could retain constant percentages of individual genetic variation over thousands of years. As if the issue that too many genetic differences, each with too little phenotypic effect, would cause the effects of the genetic differences to cancel each other out, and thus prevent the existence of individual "pathology", was not enough.

Genetic drift and genetic bottlenecks

It is true that most of the long-term changes in the genome are due to neutral mechanisms such as genetic drift and bottlenecks, not selection. But then, most of the genome is neutral too, so most of the genetic changes have no phenotypic effects. It is true that some genetic changes do have effects and that genetic drift and genetic bottlenecks can maladaptively "pick" harmful alleles at the expense of useful alleles (that is why inbreeding is risky). A useful gene that has been completely lost to genetic drift in one population can, however, be recovered by interbreeding with another population that has not lost it. It is extremely unlikely that multiple populations would lose the same good gene forever to genetic drift independently of each other. Interbreeding with Neanderthals, Denisovans and possibly others therefore disproves the claim that all modern humans share maladaptive traits from a genetic bottleneck in early Homo sapiens in Africa.

Genetic drift and genetic bottlenecks, being non-adaptive, are also not capable of adapting different genetic changes to each other. Therefore they cannot account for the type of evolution, slowed down by a need for each genetic change to be adapted to the others, that psychiatry claims acts as a buffer against fast evolutionary change. Simply surviving a significant amount of genetic drift requires an ability to survive with some parts of a set of genetic changes but not others, exactly the ability to survive that the psychiatric model of evolution denies.

Some different members in a population? How big are the populations?

Psychiatry claims that at least some forms of "mental illness" have been preserved by natural selection due to a positive effect of having a small number of members with different behavior in a group; for example, the claim of a genetic link between "mental illness" and creativity. While studies of social insects in real life, as well as computer simulations of groups performing different tasks, do support the notion that some individuals behaving differently from the others may be an advantage, they also clearly show that the favored percentage of different individuals varies with group size. A similar absolute number of different individuals can fill such a function across a wide range of group sizes, lowering the favored percentage of different individuals in larger groups, as the sketch below illustrates. Since human populations differ in group size, the model therefore does not predict similar percentages of "mental illness" in all human populations at all.
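
A minimal sketch of that scaling (Python; the favored absolute count of 3 is an arbitrary illustration, not a measured value):

 favored_count = 3                          # assumed constant across group sizes
 for group_size in (10, 30, 50, 150, 1000):
     share = 100 * favored_count / group_size
     print(group_size, f"{share:.1f}%")
 # 30.0%, 10.0%, 6.0%, 2.0%, 0.3% -- nowhere near a constant percentage.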

If the facts about rapid selection on pre-existing individual variation and the limits to the complexity of genetics are ignored, and we pretend that psychiatry's claims of genetic mechanisms slowing natural selection itself down to the point of keeping Homo sapiens homogeneous are true, we inevitably wind up with predicted percentages of "mental illness" shaped by hunter-gatherer group sizes. Food supply limits hunter-gatherer groups to approximately 30 to 50 people (the groups of San people who have been documented at up to 80 people were those who "hunted" cattle and "gathered" on the farms of nearby farmers), even fewer in some ecosystems, but we can go with 30 to 50 as the "main population" of human origins. If psychiatry's figure that 1% of people have schizophrenia were correct, that would leave every hunter-gatherer group with a major risk of, at any one point in time, not having a single schizophrenic member. If a few individuals with schizophrenia were as important for the group's survival as psychiatry claims, then human extinction would have been inevitable in Paleolithic times.
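
The risk is easy to quantify (Python; taking psychiatry's 1% prevalence figure at face value and assuming, for simplicity, that group members are independent draws):

 prevalence = 0.01
 for band_size in (30, 40, 50):
     p_no_member = (1 - prevalence) ** band_size
     print(band_size, round(p_no_member, 2))
 # 0.74, 0.67, 0.6: most bands would lack such a member most of the time.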

If we assume that there were many hereditary forms of mental illness that were all important for the survival of a group, genetic drift and population bottlenecks compound the problem. If much of genetic variation is non-adaptively lost through population bottlenecks and genetic drift, at least some of the positively selected varieties will go. If even one of the lost ones was absolutely indispensable for the survival of groups, as psychiatry claims when it tries to explain why all of its listed "mental illnesses" appear to be present in all human populations with the allegation that a population lacking even one of them would not survive, then the group would die out. And then early Homo sapiens would not have survived the bottleneck. There were still tens of thousands of years to go until the next interbreeding, and the possibility of adaptive introgression in the distant future can no more save a group facing immediate peril than the possibility of rain in the desert a hundred years into the future can save the life of someone who is dying of dehydration in the present.

Timing issues with language and schizophrenia

It is often claimed that schizophrenia is a flaw in brain development to which humans became vulnerable due to a complex developmental pattern that was/is necessary for the acquisition of grammatical language. However, while some grammatical language is present by age 4, sentences with apparently no grammatical limit on length by age 6, and speech at a speed-to-error ratio indistinguishable from that of adults by age 10, psychiatry says that the brain is not fully developed until age 25, which is also said to be the most common age for the onset of schizophrenia. While psychiatry does acknowledge a bell curve of schizophrenia onset that peaks at age 25 and refers to schizophrenia in teenagers as early onset schizophrenia, it considers schizophrenia in preteens virtually nonexistent. The individual difference model also predicts that differences in onset should be linked to differences in language development, so there is no way to shrink the gap by pairing early onset schizophrenia with late language development. How can they seriously claim something that takes place AFTER the acquisition of grammatical language to be a necessary condition for the acquisition of grammatical language?! Are they going to claim that the Internet was a necessary condition for inventing steam engines?

It is true that it does not have to be a 4-6-10 versus 25 dichotomy. Are intermediate answers possible? Yes, they are. For example, within a connectionist view of the brain, it makes sense to place the emergence of adult capacity at the point of efficient connection between the brain hemispheres, which happens at some point during the teens (barring genetic and/or cybernetic enhancement) that varies from individual to individual. The notion that most teenagers are environmentally damaged adults only demands 10-20% redundancy of myelin, within the range of evolutionarily plausible backup. The notion that grammatical language was the most recent biological step in the evolution of human mental ability, which is implicit in the claim that Homo sapiens was the first hominin with the capacity for grammatical language, requires more than 50% myelin redundancy, maybe as much as 75%, which is not evolutionarily realistic given the nutrient cost. However, any such compromise means both brain development that takes place after the emergence of grammatical language (and thus cannot be a necessary condition for it) AND an emergence of adult capacity earlier than psychiatry claims. It gives psychiatry not one error fewer, but one more. Any other intermediate would likewise leave psychiatry with a double error. Placing the emergence of "fully developed" brains even later than 25 (as some drug researchers do when they cite studies in Western countries, on test subjects of mostly European descent, that appear to show that marijuana ceases to be harmful to the brain at about age 30, and use that as a sign of "maturity", ignoring the possibility that changes of gene activity with age may influence drug responses in a way that is simply analogous to genetic differences between populations) would retain the discrepancy too.

The claim that schizophrenia is an evolutionarily inevitable side effect of grammatical language is, thus, bull. And it applies not merely to ungrammatical "proto-language" but to actual grammatical language.

Poverty of the genome

There are also conflicts of cause, i.e. cases where many H/Ts "compete" in the sense that not all of them can be true at the same time, and not only cases where there is a direct contradiction between them as H/Ts. One such additional example is the amount of available information.

For example, there are limits to how many harmful mutations natural selection can clean up during a certain number of generations. That limits how much functionally relevant DNA a naturally evolved organism can have, which is why the existence of junk DNA encompassing most of the human genome was predicted. While regulatory DNA has been found outside the protein-coding genes, relevant function has only been confirmed in about 5% of the human genome (about 150 million functional base pairs, and even fewer functional sequences, since base pairs are combined into functional sequences).
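
A rough sketch of the underlying load arithmetic (Python; the mutation count per birth is an assumed textbook ballpark, not a figure from this article):

 new_mutations_per_birth = 70      # assumed de novo germline mutations
 genome_bp = 3_100_000_000
 print(new_mutations_per_birth / genome_bp)   # ~2.3e-8 mutations per bp

 for functional_fraction in (0.05, 0.50, 1.00):
     hits = new_mutations_per_birth * functional_fraction
     print(f"{functional_fraction:.0%} functional -> {hits:.1f} hits per birth")
 # At 5% functional, a few potentially harmful hits per birth are within
 # selection's reach; at 50-100%, dozens per birth would outpace what
 # selection can remove each generation.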

That limited genetic "disc space", the poverty of the genome, restricts how many specialized biological mechanisms (including brain mechanisms) there can be, to well short of the massive modularity that underpins psychiatry's long list of specific diagnoses. The H/T "most of psychiatry's diagnoses are valid" therefore makes the blatantly false prediction that "the human genome and brain are degenerating many times too fast for natural selection to keep up, and did not evolve".

It has sometimes been claimed that the discovery of molecular repair mechanisms that repair DNA damage would somehow invalidate this mathematics. However, the molecular mechanisms cannot know whether the mutation they are reverting is beneficial, neutral or harmful. Only natural selection could tell, and it gets no chance if the mutation is instantly reverted. A repair mechanism is just a stupid automatic revert-edit robot, one that does not know what it is reverting, and it simply has the same effect as a lower general mutation rate. Since the reverted mutations disappear in one generation or less, following the genetic change over generations shows the effective mutation rate (the net mutation rate, calculated by subtracting the number of "repaired" or reverted mutations in the germline from the total number of germline mutations). With that come the same limits on how much DNA "repair" is compatible with evolutionary adaptation as on how low a mutation rate is compatible with evolutionary adaptation.

The limits to the amount of functional hereditary information that later became the foundation for predicting that most of the human genome is junk DNA were successfully derived before the double helix structure of DNA was discovered. They came from simple math of effective mutation-to-selection ratios. These same limits apply to computer-simulated evolution as well, evolution that contains no molecules. If there is alien life, the same limit applies to aliens with molecules of heredity other than DNA as well. In the case of organisms that combine multiple molecules of heredity (which octopuses, squid and cuttlefish may be, if the hypothesis that another molecule regulates gene-reading variability in their ribosomes holds correct), the limit applies to the total amount of relevant hereditary information regardless of the molecular modality it is stored in. The limits apply to anything short of engineering guided by an intelligence with advanced knowledge of the exact mechanisms. And such an intelligence is the result of evolution, not an explanation of how it happened in the first place.

Dramatic mutations out-competing networks of imperceptible mutations

It is sometimes claimed that since most genetic factors that are known to correlate with organism characteristics have very small effects, evolution must be driven by very small changes and large changes must be very rare. When cases of one or two mutations having dramatic effects are shown, it is sometimes claimed that they are merely due to destructive mutations knocking down genetic interplay networks and that the evolution of complex functions happened in imperceptibly small steps. These claims, however, cannot account for how selection could possibly maintain such vast amounts of functional genetic information. Huge masses of mutations would, if such a system were to appear, knock down all such immensely complex gene networks in one or a few generations. The omnigenic model is bullshit.

One reason why so few single-gene varieties with large effects have been discovered may be that the most commonly used methods pool populations that are too large, and miss significant genes that are concentrated in individuals from small geographical areas. While the method of looking for small groups of people who display high levels of health in spite of lifestyle factors usually associated with bad health has only been used in a small number of cases as a way to discover relevant genes, it has already yielded discoveries. For example, genes that protect people from some villages in northern Crete in Greece from cardiovascular diseases have been discovered this way (Lorraine Southam et al. Whole genome sequencing and imputation in isolated populations identify genetic associations with medically-relevant complex traits. Nature Communications, 26 May 2017. DOI: 10.1038/NCOMMS15606).

There is at least one (there may be more) different possible interpretation of the many small-effect alleles: ongoing pseudogenization. Much of what is junk DNA in some organisms has been shown to be evolutionarily related to functional genes in other organisms. It is likely that there are ongoing cases, DNA sequences that are in the process of losing their functions. The limits on how much functional DNA there can be may cause pre-existing networks of small-effect genes that increase a positively selected characteristic (or decrease a negatively selected characteristic) to fall into disrepair in competition over genetic data storage room with other, emerging networks. A way out appears when mutations that, in sets of one or two, have dramatic effects come along. If they have a larger effect than an entire network of small-effect genes, they decrease the evolutionary cost of losing the gene network. In this way, the evolution of greater organism-level complexity goes hand in hand with genetic simplification, a greater reliance on genetic mutations with dramatic effects. The fact that chimpanzees have more functional genes than humans may be an example of this kind of pseudogenization in the human line. Instead of still being parts of complex networks, the known small-effect genes may be remnants of deceased gene networks!

Examples of mutations with dramatic effects

Harmful mutations are not the only examples of mutations with dramatic effects. There are multiple examples of one or two mutations with effects on brain development whose human varieties, when inserted into mice, have been shown to have dramatic effects.

FOXP2: speech and other things

One example is FOXP2, a gene that accounts for much of the differences in signal transfer in the synapses between different organisms. It is sometimes referred to as the "speech gene", though it has many effects besides speech too. The gene, in one variety or another, is found in all vertebrates, and total loss of the entire gene is lethal. There are two mutations separating the human variety of the gene from the chimpanzee variety, and three separating the human version from the mouse version. When it was discovered that another mutation of the gene severely damaged brain function in humans (the KE family), it was initially assumed that huge numbers of other genes had to coevolve with the gene for the earlier, more constructive changes to it to have any noticeable effect. When mice were gene-edited to have the human variety of the gene, however, the modified mice showed dramatic changes in their vocalizations and visibly enhanced learning ability. They did not start to speak like humans, but then, their small brains, as well as their small vocal tracts with a completely different pitch range, are good explanations for that. Still, three point mutations had a dramatic effect on their behavior!

SRGAP2, synapses and what slow growth does and does not mean

The SRGAP2 gene is also an example. While other mammals have one copy of the gene, humans have four, due to a small number of gene duplications. When these extra copies were inserted into mice, they slowed down the development of connections between brain cells. This also made the brain cells keep growing connections for a longer time, the end result being about 60% more connections.

The SRGAP2 change that slowed down the growth of brain cell connections (but created more of them in the end) flourished in the first mutants that carried it (otherwise it would have been eliminated by natural selection), without a social system that could possibly have coevolved with an alleged greater need for parental care. This falsifies the claim that human intelligence went hand in hand with much more helplessness in young individuals. It also falsifies the claim that a more complex social environment was needed to stimulate the changed brain circuits and make them useful. There may, for example, have been a transitional slight increase in the risk of young individuals eating something poisonous, which was more than compensated for by other advantages. As for social interaction, computer simulations show that ape social systems can be emulated by simple reactions without intelligence.

Genes and evolution versus gender, what does emotional maturity mean?

It is often claimed that while girls reach physical maturity faster than boys, boys reach emotional maturity faster than girls. It is also often claimed that humans evolved slower physical maturation than other primates as an adaptation to slower emotional maturation. Are these claims even compatible? The gender claims amount to saying that emotional maturity is, in principle, decoupled from physical maturity. If there is no inextricable system-level link between physical maturity and emotional maturity, there is nothing stopping genes from affecting one but not the other, directly at the level of the individual gene carrier and regardless of gender. And then there is nothing stopping natural selection from selecting on those gene varieties and turning the individual differences into population differences. The result would be the evolution of population and species differences with the same decoupling as is claimed as a gender difference.

That is, the claim of decoupling of physical and emotional maturity regarding gender is incompatible with the claim of physical and emotional maturity being linked regarding ape to human evolution.

Evolution and timing of maturity

If it takes "emotionally mature" females to raise infants with a proper chance of surviving to adulthood, there is no way evolution could select for burning nutrients on biological sexual maturity prior to that level of "emotional" maturity. The nutrients lost through menstruation if left alone, as well as the perils of pregnancy in the case of very poor chances of the infant surviving, would strongly select against any such thing. Therefore the claim that "girls are biologically mature before they are emotionally mature" is ignorant of evolution.

In the case of monogamous couples in which the male served an important role in the survival of the infant, an analogous evolutionary infeasibility of "biology first, emotion later" applies to males as well.

Psychologists also claim that developing brains are more expensive in nutrients than "mature" brains. Such a high nutrient expense of the brain, coinciding first with the nutrient costs of genital development and later with the increased nutrient cost of mature genitals compared to immature ones, would generate a transient period of very high overall nutrient costs. It would mean a vulnerability to temporary nutrient deprivation, and that in individuals with the successfully reproducing part of their lives still in front of them, who are therefore not in the type of selective shadow that permits aging-related diseases such as dementia. That is something that would be strongly selected against by the unstable food supply that all of our ancestors faced prior to agriculture, when they were hunter-gatherers.

Facial recognition and socializing with non-kin

It is claimed by psychologists that brains become impulsive during the teenage years as a Stone Age adaptation to moving to another group and forming social bonds with non-relatives, but that facial recognition develops in one's twenties. Since psychology also claims that recognizing others is very important in social interaction, such a mismatch in time would never have been permitted by evolution. The possibility that apparently increased facial recognition may be due to a vision-centered flaw in tests that ignore other types of recognition, such as voices, should be considered. After all, the ability to hear some high-pitch sounds usually disappears at about age 20. Facial recognition may be a learned compensation, similar to blind people relying more on hearing. The apparent difference between recognizing faces and recognizing places may simply be due to the fact that places usually do not have as distinctive high-pitch sound profiles as human voices do, and that most types of television and Internet streaming (which are the depictions of distant places that most people have access to) do not reproduce the high-pitch sounds that people over 20 usually cannot hear.

If this model is correct, cases of prosopagnosia (lack of facial recognition) that do not come with retardation or general brain damage should be linked to ageless hearing ability: "mature" adults with the same high-pitch hearing as teenagers, who never had to compensate by recognizing faces instead of voices. Young people with mild hearing impairment should also display facial recognition at a level considered "mature" by psychology.

As a further note on facial recognition, archerfish (known for their technique of hunting insects by shooting a jet of water that makes the insects fall into the water) can recognize faces, including human faces. This is documented by their actually jetting water at faces they have seen before, not by dubious fMRI scans. Given that they, like all fish, lack the part of the brain that is claimed by psychologists and many primatologists to be responsible for facial recognition, this is relevant as a criticism of the claim of a domain-specific mechanism for recognizing faces.

In Artificial Intelligence research, general learning neural networks have shown their ability to learn faces with no domain-specific programming for facial recognition.
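
For example, a generic multilayer perceptron with no face-specific code anywhere can learn to tell faces apart on the public LFW dataset. A minimal sketch using scikit-learn (the hyperparameters are arbitrary choices; the point is the absence of a built-in face module, not state-of-the-art accuracy):

 from sklearn.datasets import fetch_lfw_people
 from sklearn.model_selection import train_test_split
 from sklearn.neural_network import MLPClassifier

 lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)  # downloads data
 X_train, X_test, y_train, y_test = train_test_split(
     lfw.data / 255.0, lfw.target, random_state=0)

 # A general-purpose network: nothing in it "knows" the inputs are faces.
 clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
 clf.fit(X_train, y_train)
 print("held-out accuracy:", clf.score(X_test, y_test))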

What is a horrible experience?

Are the lists of what types of experiences psychology and psychiatry acknowledge as traumatic really falsifiable? In other words, are psychology's and psychiatry's claims of what is traumatic and what is not free from immunization against criticism? Suppose, for example, that a person feels traumatized by a number of events in which his or her parents decided things that they were legally allowed to decide when he or she was 17, and told him or her that they had decided and that he or she could not refuse. Would there be any way to falsify psychology's and psychiatry's claims that the person is either "only whoring for attention" or "was traumatized by something else" (read: something illegal)? As far as I know, psychology's and psychiatry's claims are immunized against criticism and therefore unscientific. If anyone knows something that could falsify these claims, please say it!

There are also cases of degree. For example, imagine a person who was involuntarily committed by psychiatry, forcibly medicated (in accordance with psychiatry's rules) and also raped by one of the psychiatrists at the institution (illegally). If the person feels very traumatized by the legal forcible medication but only mildly traumatized by the illegal rape, would there be anything that could falsify psychiatry's claim (most likely uttered during investigations afterwards) that it was the illegal rape that was the main trauma? Or is psychiatry's claim simply unfalsifiable bull?

Psychiatry claims that the statement that anything can be traumatic is a "myth", but the truth is that individuals are individuals. We cannot know by generalization. See also universal fallibility.

It is also possible that the statistics that appear to show that interpersonal violence (such as rape and attempted murder) is more likely to cause trauma than natural disasters are due to a skewed culture in psychiatry and its values in society. It is possible that people who feel traumatized by natural disasters are forced by psychiatry's assumption of interpersonal violence victim privilege to suffer in silence; that victims of natural disasters are terrorized with "you cannot compare your experience to what victims of interpersonal violence have suffered, that would be an offensive comparison". That pressure may silence so many natural disaster victims that it creates an illusion of natural disasters being "better" than interpersonal violence.

The filed charges effect and desperate framing

When it comes to people who, for some reason, are not free to choose whether or not they want to be treated in a particular way by certain other people, those who find a legal treatment horrible are generally not taken seriously by authorities. This may effectively force such people to file charges about something illegal instead, as they see no other way to stop the treatment that is horrible to them but accepted by society. I remember the horrors that legal guardianship legally put me through when I was a "minor", and I fully understand that some are desperate enough to frame people with power over them for something illegal, such as physical beatings or sexual crimes that did not actually happen, especially those who either have, or think that they have, a chance of ending their horrors by framing its direct representatives. Since having fewer charges to choose from means that desperate framers need to rely more on the few crimes that are acknowledged, framing may explain why more people are convicted of sex crimes against minors in jurisdictions where corporal punishment is legal than in jurisdictions where it is illegal.

Apart from legal guardianship, compulsory school may also act as a legal abuse that mentally harms some. There are examples of this not only for people under the age of majority, but also for some people over that age, especially those under involuntary commitment by psychiatry. In the case of psychiatry, being forced to take "medications" can be an example of legal abuse that people are desperate to find a way out of. This applies whether the people are locked into closed psychiatric wards or are merely threatened with it as a means of forcing them to take the medications that visiting psychiatrists force onto them.

Apart from the desperate people filing charges themselves, it is also possible for people who hear about how bad the desperate people feel to assume that there "must" have been illegal abuse. These external assumers may then file charges, even if the desperate people themselves file no charges whatsoever. That is, simple denial of the possibility that people may feel traumatized by society's legal treatment of them forms self-fulfilling prophecies in the statistics.

This effect of filed charges means that mental harm that was actually done by society's legal forms of abuse is officially filed as "caused" by illegal forms of abuse. This acts as a self-fulfilling prophecy that gives false support to society's generalizing claims of what is horrible and what is not. When combined with comparophobia along the lines of "if you compare legal A to illegal B, you are offending victims of illegal B", it leads to further pressure (as if the threat of going to jail for "false framing" was not bad enough) on victims of legal abuse to remain silent about the horrors that society legally inflicted on them.

It is time to break this vicious circle with absolute respect for the fact that ONLY the individual can know where the shoe pinches! Perhaps a first step may be to inform people about the statutes of limitation for framing, so that at least some of those who were desperate to become free from legal abuse, and framed their legal abusers for something illegal, dare speak up about their framing, their desperation and what they really felt oppressed by. Information that makes it easy for individuals to know whether or not they are free to speak, by being able to calculate how the statute of limitation for the act of framing is influenced by how young they were when they framed, without having to tell anyone about it before they know, should be included in the information campaign. The Internet would be useful for spreading information about that, as it is useful for information about many things.

Autism and self-fulfilling prophecies

It is sometimes claimed that since analysis of connections in the brains of six-month-old infants can predict autism with 96% accuracy, this "proves" an innate defect in a putative specialized social brain module. That claim ignores many things, however, including the possibility that small hereditary differences in movements and in the perception of some sensory information may lead to some children being ostracized by their peers. Simple social exclusion of children with hereditary characteristics that are in themselves unrelated to social behavior may leave them out of the learning of the social codes for children of their age in that culture. Later in life, "warning bells" based on their lack of those codes may lead to further exclusion from later codes, and so on. Also, attempts to "help" people within the confines of the assumption that there is a "deficit" in them may force them into remaining in that role. Changing society away from snobbish assumptions that social rules "must" be unspoken might work, on the other hand.

It was initially assumed that the overrepresentation of diagnosed autism in Somali children in Sweden was due to vitamin D deficit during pregnancy, caused by their dark-skinned mothers living at a low-sunlight latitude. However, the overrepresentation was most marked in cases of recently immigrated parents and decreased if the parents had been in Sweden for many years. That was contrary to the vitamin D hypothesis, according to which the deficit would increase over time as vitamin D stores in the body dwindled. A similar spike in autism diagnoses has also been observed among Norwegian immigrants in the United States in the 1940s, for whom lack of vitamin D was never an issue.

Apes, imitation and human influence

It is often claimed that humans are uniquely social in the sense that human children imitate the entire process associated with getting food, even the irrelevant steps, while apes only imitate the steps that are necessary to get the food. However, the studies have been done on apes in captivity, and their exposure to humans may have influenced them towards less "know others as you know yourself" behavior than in wild apes, a stronger version of the effect seen in humans who have recently migrated.

In the wild, there are many examples of ape traditions that persist over centuries and even over millennia, often in the form of cultural differences that are arbitrary and not related to differences in food supply, parasites or material availability. There are also more ape traditions in areas with little human influence, such as the chimpanzee culture in Goualougo.

Problems with the concept of psychopathy

The brain, the pleasure system and empathy

Psychiatrists claim that the common brain trait for all psychopaths, criminal or non-criminal, is a higher than average reliance on the brain's pleasure system. However, psychologists also claim that empathy and altruism are driven by the brain's pleasure system rewarding the act of helping others without individual gain. This means that the psychiatric/psychological model of the brain predicts that psychopaths, by being the most reliant on the brain's pleasure system, should be the most empathetic and altruistic people of all. Quite the opposite of what is both officially and popularly claimed. Whether psychopaths are claimed to be totally non-empathetic or just less empathetic is beside the point: psychology/psychiatry's brain model predicts that psychopaths should be more empathetic than others, which has nothing to do with the "gradual versus black and white" question of whether anyone is "non-empathetic" at all.

Evolutionary issues of mating behaviors

Psychiatry's claims about differences between psychopathic and non-psychopathic reproductive strategies also ignore evolution. Psychiatry claims that while most non-psychopathic heterosexual men are most sexually attracted to females aged 25-32 (the age range with the highest de facto fertility as measured by the ability to produce surviving babies), psychopathic ones more often prefer females in the 16-24 age range (in which the higher amount of blood circulation causes the oxygen saturation in the placenta to exceed the ideal 33%, increasing the risk of miscarriage). Psychiatry also claims that while non-psychopaths are wired to form long-term relationships, psychopaths are wired to be promiscuous and to change partners often. That effectively adds up to psychiatry claiming that the long-term monogamous reproductive strategy favors bonding with those who are currently at peak fertility, while the promiscuous reproductive strategy favors mating with pre-peak ones. That makes no evolutionary sense at all. In reality, the most genetically efficient strategy for long-term monogamy is to include the entire fertile part of life by bonding with pre-peak fertility partners. It is the promiscuous reproductive strategy that favors preference for those at the peak of their fertility, as those have the highest current likelihood of passing on genes.
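
A toy model of the two payoffs (Python; the fertility-by-age numbers are invented solely to illustrate the argument, not empirical data):

 fertility_by_age = {16: 0.5, 20: 0.7, 25: 1.0, 30: 0.9, 35: 0.5, 40: 0.2}

 def remaining_fertility(age):
     """Crude sum of fertility over the partner's current and future ages."""
     return sum(f for a, f in fertility_by_age.items() if a >= age)

 for age in (16, 25):
     print(age, "current:", fertility_by_age[age],
           "remaining:", round(remaining_fertility(age), 1))
 # With these invented numbers, the younger partner maximizes REMAINING
 # fertility (the long-term bonder's payoff) while the 25-year-old maximizes
 # CURRENT fertility (the promiscuous payoff) -- the reverse of psychiatry's
 # pairing of strategies and preferences.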

Also, for a male to kill or severely physically damage the female he is mating with prevents her from carrying a baby with his genes to term, which is counterproductive as a reproductive strategy. It would be far more sociobiologically believable for females to evolve a behavior of killing their male partners after mating, as some animals (especially insects and spiders) do. While it is possible for kin selection to produce non-reproducing individuals that help their relatives, which has been suggested as an evolutionary model of homosexuality as well as of sterile insect workers, it is not transferable to a behavior of harming without helping, which is what psychiatry claims that "psychopaths" do.

Of mallards and people

When the discovery of homosexual necrophilia in mallards received an Ig Nobel Prize, it was said that the mallard did not know that the other duck was dead and was therefore not perverse. It was stated that only humans are aware of death. However, psychology considers death awareness to be a higher cognitive function, and it also considers sexuality to be separated from cognitive functions (such as awareness of death) by an encapsulated wall of domain specificity (i.e. the claim that intelligence does not affect sexuality). To consider necrophilia in humans a paraphilia is therefore incompatible with psychology's assumption that brains are modular and that cognition does not shape sexuality.

While it is not in itself flawed to argue that it takes a functioning brain to express consent, that does not change the fact that it takes higher cognition to know the concept of consent. And knowing concepts is exactly what psychology's and psychiatry's belief in "domain specificity" claims is encapsulated away from "emotion".

Can psychopathy be measured?

There are multiple issues with tests of "psychopathic" traits. For example, the tests rate both "lack of empathy" and "breaking the law" as "psychopathic" traits as if they were equivalent. But what happens when the law tells people to behave in non-empathic ways? A crime is a breach of something written in the law, and there are many examples of breaking the law by helping or not hurting others. If you give poor people who need it the knowledge of how to copy patented medicines, you break the law. If you refuse to kill people in war, you break the law.

In all of the cases above, it is the illegal "empathy" that helps the individual people in front of you. While it is theoretically possible to argue that breaking the law may have long-term negative consequences that hurt more people in the long run, using such a definition of "empathy" is incompatible with psychiatry's assumption that valuing individual help above all is "empathy" while so-called "utilitarian" help of the many is "non-empathy". The legalistic argument relies on long-term effects on the many, not on direct effects on the few whom you are helping! As utilitarianism actually means "the greatest happiness for the greatest number" but is often misapplied to completely different ways of helping the many (such as saving lives, which is decoupled from happiness and suffering), the word "utilitarian" is placed in quote marks here when used in its misapplied sense.

One item on the list of "psychopathic" traits, sexual promiscuity, cannot be applied to virgins. If you are a virgin, how do you answer the question "are you promiscuous"? Pointing out that the sexual promiscuity question exists only in the questionnaire for adults, and not in the tests of precursor psychopathic traits, misses the point that there are virgins above the age at which psychiatry draws the line between tests of adolescent precursors of psychopathy and adult psychopathy tests. That age line is also drawn at 19 in some countries and at 24 in others, as if the rate of brain development depended on the jurisdiction in which one lives. It is true that rapid natural selection, in cases where individual genetic variation is already present, makes the biological model theoretically compatible with the notion that brain development rate may be influenced by the age restrictions of the jurisdictions in which one's relatively recent ancestors lived for many generations (whether there is any empirical support for that is a different question altogether). But the claim that genes shaping brain development change depending on the jurisdiction in which one is currently living is totally bull. And apart from virgins, single people who have had sex a few times but still struggle to find a partner and rarely get the opportunity to have sex also fall into a "not applicable" category for the sexual promiscuity item in psychopathy tests.

Tests of psychopathic traits also have large steps on each question, usually only three, one more than a yes-or-no question. Empathy, criminality, promiscuity, tendency towards boredom and so on are all measured in a small number of steps, usually three each. That forces many people to choose between two alternatives when they cannot decide which is closest. This may be part of the reason why the same person can be rated as a psychopath on some tests and as a non-psychopath on others. These tests are used not only popularly but also "professionally", including by forensic psychiatrists who decide whom to classify as "dangerous".

Empathy vs Machiavellism, how could humanity evolve?

Psychiatry classifies both Machiavellism (intrigue for power, lying and deception) and "lack of empathy" as psychopathic traits. That is, psychiatry assumes a negative empathy-versus-Machiavellism link, in which more Machiavellian individuals display less empathy. However, psychology and ethology both claim that humans are more empathic than apes and that humans are more Machiavellian than apes. If there were indeed some kind of pleiotropy creating an evolutionarily fixed Machiavellism-versus-empathy scale, as psychiatry claims, it would have precluded any species from evolving both more empathy and more Machiavellism than related species.

Evolution decrees that differences between species must be on a continuum with individual variation: individual variation is the material from which new species are built by evolution. So there is no point in trying to explain the contradiction away by saying that humans and other apes are different species.

Cruelty to animals, evolution of empathy, necessary vs sufficient conditions

Psychology and much of ethology claim that empathy in mammals evolved step by step, in extensions from an initial empathy restricted to close kin towards including more and more distant kin. That is, empathy with close relatives is a necessary but not sufficient condition for empathy towards more distant relatives or non-kin. However, at the same time, psychiatry and parts of criminology claim that all humans who are cruel to animals are also less empathetic towards other humans, whereas some people who are violent towards other humans may display high levels of empathy towards non-human animals. But other humans are closer relatives to humans than non-human animals are! Now empathy with more distant relatives has become a necessary but not sufficient condition for empathy with closer relations?! That means psychology/psychiatry's views constitute a total reversal of what is a necessary but not sufficient condition for what. Evolutionary continuity would never permit a reversal like that.

This means that psychology's and psychiatry's views on empathy are completely flawed and as such, they are pseudoscientific.

Issues with the inhibition cop-out model of the animal-human violence link

It would, in theory, be possible to argue that a link between cruelty to animals and violence toward other humans could be due to an underlying lack of inhibitions. That would circumvent the contradiction with the premise about the evolutionary order of empathy and necessary vs. sufficient conditions. However, since an underlying mechanism would cause both, it would still fail to account for the apparently one-way link: cruelty to animals is claimed to predict human violence, but human violence is not claimed to predict cruelty to animals.

There are also fundamental evolutionary problems with the whole idea of inhibitions as a biologically distinct system of brain mechanisms: nutrient thrift favors simply lowering the strength of an impulse over evolving a separate control mechanism that also costs nutrients to maintain.

Further incompatibilities with provider monogamy (hunting)

The claim that humans somehow transitioned from an empathy extended from closer kin to more distant kin to a fundamentally different empathy scale, in which empathy with other species became necessary but not sufficient for empathy with kin, becomes even more ridiculous when combined with the claim that humans evolved monogamy so that men could care for their families by providing meat that they hunted. That would imply that men evolved both more empathy (for their relatives) and more cruelty towards the animals that they hunted (no place for mercy with the prey if you are supposed to feed your family). That, if anything, would require a strong empathy with close kin that can coexist with cruelty to animals in the same individual. And if females of many species that evolved carnivory independently combine parental care with cruel hunting techniques, and those species span different chromosome arrangements and sex determination systems, some of which are hormonally incompatible, why would males universally be excluded from the same combination? There is no reason for it.

Capri octopuses and non-cannibalism

It is often said that laws and rules created by empaths force psychopaths and sociopaths to obey and play by the rules, and that society would collapse without "innate" empathy. That claim cannot explain the society created by the Capri octopuses. They are a group of Octopus vulgaris, normally a completely solitary species in which the mother starves to death before her eggs hatch. They have no evolutionary history of living in groups or taking care of each other; even the guarding mother simply lacks appetite for food, and the eggs neither look nor smell like hatched octopuses. Otherwise they are cannibals: in most places, any adult octopus that is not full or without appetite will eat a hatchling octopus, whether the adult is male or female.

However, one group of Octopus vulgaris near Capri, Italy, has started living in groups. Not only do the Capri octopuses learn by imitation (which also refutes the doctrine of a specialized mechanism for observational learning and thus the psychiatric concept of autism), but the adult octopuses also do not cannibalize the young. The change happened suddenly across an entire geographical area (ruling out genetic change as an explanation) and has persisted even though many hatchlings from outside the group have drifted in and spread their genes there. And there is not one shred of evidence that mammals punish the octopuses for eating young octopuses!

The fact that the octopuses at Capri abandoned cannibalism without punishment enforced by "naturally" social animals such as mammals does have implications for humans. It means that the assumption that some people are born to care only about themselves and can only be kept in check by laws and enforcement from people born with "innate empathy" is simply wrong.

Artificial selection, frontal lobe genes by jurisdiction and age

Psychiatry says that there is a low correlation between psychopathic traits in young people and those in mature adults, due to late frontal lobe development. However, this model also predicts the existence of genetically based differences in the rate at which frontal lobes develop. And where there is individual genetic variation and diverging selective pressure, group differences form. There are, and have long been, differences between jurisdictions in age restrictions. In psychiatric theory, that would have selected for different rates of frontal lobe development between people in different jurisdictions. So where is the evidence for differences in the distribution of crime across ages linked to what the law said in the jurisdictions from which one's biological ancestors came? Why do we not see studies showing, say, that people whose genetic heritage is from jurisdictions with lower ages of criminal responsibility have a weaker correlation between age and crime rate than those with genetic ancestry from jurisdictions with higher ages of criminal responsibility?

Is modern psychiatry different from historical abuse?

It is often claimed that modern psychiatry is different from the political abuse of psychiatry in the former Soviet Union, Nazi Germany and other similar regimes. However, stripping away the semantic differences, the content of the Nazi concept of "degenerate people" is very close to modern psychiatry's concept of "empathy deficit". Psychiatry's claim that people who do not believe in psychiatric diagnoses are "psychotic" is analogous to the former Soviet claim that there was something medically wrong in the brains of people who did not believe in Marxism (the extensive use of drugs in Soviet psychiatry shows that molecular models of behavior were generally accepted there, contrary to the claim, found in some literature alleging communist motives behind anti-psychiatry, that Soviet communists believed only in social causes of behavior). It is the same immunization to criticism, the same lack of falsifiability. A claim that pathologizes its critics is not scientific.

When the Swedish institute for racial biology was decommissioned, many of those who worked there were employed as psychiatrists and remained in that profession until they reached retirement age. They were not fired for their background in racial biology. The institute is known for having influenced Nazi racial ideas. When the Swedish former racial biologists turned psychiatrists retired, Sweden started importing psychiatrists from the former Soviet Union and other former East Bloc states. A high percentage of the psychiatrists active in Sweden today studied psychiatry in Eastern Europe during the Communist era, and they are among the most prestigious psychiatrists in the country, more so than psychiatrists originally from Sweden. It is interesting that retiring psychiatrists with Nazi ties were so happy to leave their positions to foreign psychiatrists with Communist connections and non-Nordic ethnicity. Perhaps that willingness had something to do with the fact that both ideologies are based on immunization to criticism by pathologizing critical thinkers.

Even the United States imported Communist-educated psychiatrists from the Soviet Union and other East Bloc countries; that import was ongoing even at the height of the Red Scare. While the imported psychiatrists were forced to pledge loyalty to economic liberalism, the other aspects of their Communist education (including the practice of assuming interest motives behind criticism, dismissing criticism's rational arguments, and pathologizing the alleged motives) were welcomed in the United States. Their denouncing of economic Communism did not make them any less totalitarian, just as China remains totalitarian under economic liberalization.

And it is not over in Sweden or elsewhere. Psychiatry classifying people as "psychotic" for criticizing psychiatric diagnoses still happens today. They even produce bogus fMRI scans, using the same flawed methodology that "discovered" brain activity in a dead salmon, to "prove" that people who do not trust psychiatry lack certain brain functions. An evolutionary question for that claim: how could a brain mechanism specifically for trusting professional psychiatrists evolve before the psychiatric profession even existed?

Psychiatry's and pro-psychiatry's practice of claiming that the bad things about psychiatry are due to "individual quacks", while the system as such is good, also closely mirrors the Soviet practice of purging and disgracing individual Party members while silencing all criticism of the Party as such. The similarity is likely a result of the many imported Soviet psychiatrists who shaped much of the organization of modern Western psychiatry as well as its public relations methods.

Why did Neanderthal DNA not destroy our brains? A flower analogy

It is often claimed that human brain development is extremely complex and relies on vast numbers of genes that evolved only slowly. Those claims underpin a variety of other claims, including the claim that genetic engineering is likely to destroy and not improve humanity, the claim that young people are mentally immature, and the claim that humanity evolved very slowly without thresholds or breaks. The claim, however, predicts that human brain development should be extremely sensitive to admixture of other DNA. Compare flowers: how sensitive a flower species' smell is to admixture of DNA from related species that smell differently depends on the number of genes involved in the smell. Flowers whose smell is determined by two or three genes can take high amounts of admixture, and many of the hybrids will smell like purebred flowers. However, if hundreds of genes are involved in the flower's smell, even a tiny amount of admixture will destroy it. In the latter case, even a fraction of a percent of admixture will make it virtually impossible for a single flower ever to smell anything like a purebred flower again (the odds approaching those of randomly picking the surface of a habitable planet out of all places in the Universe if many of the "derived" mutations are recessive).
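
A back-of-the-envelope sketch of the argument, assuming (for simplicity) that admixed alleles land independently at each locus at the overall admixture rate; real genetic linkage would change the exact numbers but not the trend:

    # Probability that all n genes required for the trait escaped admixture,
    # with admixture fraction a at each locus (independence assumed).
    a = 0.02   # e.g. roughly the low end of the Neanderthal DNA share
    for n in (3, 100, 500, 2000):
        print(f"n = {n:4d} genes: P(all pure) = {(1 - a) ** n:.3e}")

With a handful of genes the intact combination stays common, but with hundreds or thousands of required genes its probability collapses towards zero, which is the flower analogy in numbers.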

Non-African humans have 2 to 4 percent Neanderthal DNA. The anatomical methods that are used to determine maturity say that Neanderthals "were never teenagers" in the biological and neurological sense. These are the same methods that are used as arguments for psychiatry's view of extremely "complex" human emotions, so criticism of them is criticism of psychiatry. There are even differences in braincase anatomy that psychiatry's brain-foci models interpret as a difference in emotional and communicative development between Homo sapiens and Neanderthals as early as in toddlers.

So if the development of human brains is as complex as psychiatry claims, why has the small percentage of Neanderthal DNA not messed up emotional development in non-African humans and rendered them incapable of creating civilization? If "normal" brain development relied on a long list of necessary but not sufficient genetic changes, a tiny percentage of foreign DNA would put wedges in the chain, rendering everything that evolved as a later enhancement of a knocked-out function useless. That would make it extremely slow, if not impossible, for evolution to recover any but the earliest stages of the evolutionary complexification that had taken place after the split between the two species.

Issues with argument from culture

The uselessness of criminal statistics

There are many ways in which cultural assumptions of correlations between different behaviors can act as self-fulfilling prophecies in criminal conviction statistics. One possibility is that people who are assumed to be "dangerous" are avoided and forced to lie and commit crimes such as theft to survive: when being denied work ruins their finances, when shop owners refuse to serve them, or when they have to replace expensive things sabotaged by haters. It is also possible that being treated as "unforgivable" creates a nothing-to-lose effect and/or makes people angry at a society that hates them. It is also possible that criminal investigators are selectively overzealous towards people they consider "suspect", so that more of their crimes are revealed than for people not considered "suspect". It is also possible that courts more often convict people they consider "likely to commit crime", creating self-fulfilling prophecies in conviction statistics in which false claims confirm themselves. That, in turn, gives the courts false statistical "evidence" that some people are more likely to break laws (whether laws in general or particular categories of laws), which they then use as circumstantial "evidence" to produce even more convictions... and so on. Ergo, statistical correlations in criminal conviction statistics are useless as arguments for shared underlying brain mechanisms!
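
A minimal simulation of the scrutiny mechanism alone, with entirely hypothetical numbers: two groups offend at exactly the same rate, but the "suspect" group is investigated more zealously, and the conviction statistics diverge anyway.

    # Equal true offense rates by construction; only the probability of being
    # investigated and convicted differs between the groups.
    import random
    random.seed(1)
    N = 10_000                # people per group
    OFFENSE_RATE = 0.05       # identical in both groups
    scrutiny = {"'suspect' group": 0.6, "other group": 0.2}
    for group, p_convict in scrutiny.items():
        offenders = sum(random.random() < OFFENSE_RATE for _ in range(N))
        convicted = sum(random.random() < p_convict for _ in range(offenders))
        print(f"{group}: {offenders} offenders, {convicted} convicted")

The "suspect" group ends up with roughly three times as many convictions despite identical offending, which is all the feedback loop needs to get started.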

The uselessness of crime statistics derives from self-fulfilling prophecies, which apply equally no matter the social status of those who are considered more likely to commit crimes. For example, the claim that overrepresentation of crime in men is different from overrepresentation in poor people or in people of color because men are privileged ignores the fact that self-fulfilling prophecies are decoupled from the status of their subjects. Such prophecies work as automatic machineries without relation to the economy of those alleged to be criminals, unless the alleged offenders are rich enough to afford elite lawyers, which only the top 1% can anyway. And the 1% is too small to have any statistically significant impact on crime statistics in the population.

Harsher punishment for the same crime can also be based on a belief that some people are more likely to reoffend than others, which draws on the same self-fulfilling prophecy as the difference in crime statistics.

Prejudice shaping recidivism without being presented to the court

Even if the psychiatric assessments that are used on convicted criminals are not presented to the court in reoffense trials, they can still shape the pipeline to recidivism trials by being used to watch some convicted criminals more closely than others. If forensic psychiatrists believe that one set of convicted criminals is more likely to reoffend than another, the simple fact that the more suspected subset is more closely watched may cause them to be convicted again more often, even if they commit exactly the same number of new crimes. And since crime statistics on paper are based on the convicted, not on those who get away with their crimes, this becomes a self-fulfilling prophecy producing apparent evidence for false psychiatric theories.

Police budget effects and police prejudice

Differences in which crimes are brought to court in the first place are often rooted in which crimes the police investigate and which they do not. The police have a limited budget and must set priorities. This means that they choose to investigate the complaints they deem most likely to be correct. In that way, police belief that some people are more likely than others to commit crimes creates a bias: those considered more likely to commit crimes are more often investigated, and therefore less likely to get away through complaints against them being dropped out of budget-based priority. This bias operates even if the distinctions are never used in court, indeed even if use in court is prohibited. It becomes a self-fulfilling prophecy before court.

One example is police assumptions about impulsive men and empathic women leading them to prioritize complaints filed against men over complaints filed against women. This bias can easily arise from biologistic allegations (as opposed to any kind of feminist ideology), which means the entire "biological sex difference versus patriarchal structure" framing of such debates is mistaken. The claim that feminism causes police to ignore complaints about female criminals ignores the distinction between internal police work and what the police are allowed to say publicly. Just as restrictions on the police publishing statistics on immigrants in crime do not stop them from arresting immigrants and investigating their crimes, there is no reason to assume that any ethics against publishing statistics on female criminals would prevent them from arresting women and investigating their crimes. In a society that applies double standards in the definitions of racism and sexism, effectively treating claims about sex differences as more acceptable than equivalent claims about ethnic differences, it is absurd to think that a police taboo would impact investigations of crimes committed by people of a particular gender more than investigations of crimes committed by those of a particular ethnicity. It is equally absurd to think that a patriarchal structure would cause police to arrest and investigate more men than women. All of it is explainable by the hypothesis that self-fulfilling prophecies in police work are decoupled from economic privilege.

Since forensically investigating a murder costs much more money than arresting a marijuana smoker on the street (and similar comparisons hold for other pairs of crimes), decriminalizing victimless crimes cannot significantly mitigate the budget problem for crimes with victims.

Forensic psychiatric assessments that contradict their own premise

One example of a forensic psychiatric assessment method for the alleged likelihood of committing new crimes in general that also contradicts its own premise is brain scans for psychopathy. Forensic psychiatrists claim either that psychopaths feel less emotion in general or that psychopaths feel less spontaneous emotion. However, either claim would also mean less desire for "egoistic" gain: less motivation for stealing and committing violence just as much as less motivation to help others. The claim that psychopaths feel less disgust than other people contradicts the claim that psychopaths are more violent, as disgust can lead to assault and murder. The same goes for passion murder, if psychopaths are said to be less emotionally attached and therefore less likely to experience strong romantic jealousy. The desire to help others can also motivate stealing, or economic scams, to afford helping others. And since psychopaths are said to have less connected "centers" in the brain, how do psychiatrists explain the societies built by insects whose "brains" consist of multiple very poorly connected nerve knots? Why do insect societies not collapse under egoism and breach of social rules? Why do the workers care for the queen, eggs and larvae? And in the case where psychopaths merely have less spontaneous emotion, that would make it easier to obey the law and not break it out of emotional impulse.

There are studies that claim to show that people can recognize whether or not a man is a criminal just by looking at his face, but not what type of criminal he is. However, one such study also claims that only men recognize rapists as criminals, while women mistake rapists for non-criminals. Such a difference is incompatible with the claim that the same appearance applies to all criminals. The supposed "explanation", that only men who look non-criminal to women ever get the chance to rape them, ignores the existence of rapists who either ambush their victims or rape victims who are heavily drunk or otherwise less than fully conscious, both of which make the woman's ability to recognize the man as a criminal irrelevant to his "success rate". It also ignores homosexual rape. There is also the evolutionary issue that if hereditary characteristics both made men criminal and altered their appearance in predictable and linked ways, some geographical groups of humans would have killed, castrated or otherwise eliminated such men from their gene pools (artificial selection in culture, somewhat comparable to the natural selection on lactose tolerance). So why are there no crime-free geographic groups of humans?

One contradiction in a forensic psychiatric claim of assessing a specific type of crime is that penile plethysmography, or phallometry, is said by statistics to be more accurate at predicting whether a male child molester will reoffend against boys than whether he will reoffend against girls, while vaginal plethysmography is said to be virtually useless for predicting whether a female pedophile will reoffend or not. If heterosexual men are said to be somewhat more masculine than homosexual men, that contradicts the reported statistics that assessing men's erotic age preference by penile plethysmography succeeds somewhat less well for heterosexual men than for homosexual men: a method that is fairly accurate for men but very inaccurate for women should be most accurate for the most masculine men and somewhat less accurate for more feminine men. So the biological explanation of such statistics fails in the context of the feminine-gay-men model (there is a model of hypermasculine homosexual men, but it has problems explaining the apparently higher fertility of women with gay male relatives in the context of evolutionary compromises between males and females). Self-selection bias based on shaming is likely to keep arousal-non-concordant men, who are only sexually interested in adults but display age-nondiscriminating penile arousal, from volunteering for the control groups, a bias that is likely stronger for gay men due to cultural stereotypes depicting gay men as pedophiles. The apparent reoffending correlation is explainable by those who train to suppress their erections being better at hiding their crimes, making them less likely to get caught again even if they reoffend to the same extent as those with "inappropriate" erections.

The hypothesis that erection is a proxy for sexual arousal taking place in the brain cannot explain why penile plethysmography appears to be a more accurate predictor of recidivism than brain scans, as adding another step of transfer (such as signal transfer from the brain to the penis) can only increase, not decrease, error rates. The hypothesis that erection control trained by meditative means is a proxy for a sex offender who works hard to hide his crimes can explain it, however, as the expensive devices for training control of one's brain waves (which are used by some paralyzed people) are nowhere near as accessible or widespread as the opportunity to check one's own penis for free during training. The hypothesis that sex crimes are caused by a desire for particular physical types cannot explain why phallometric studies are statistically almost as accurate predictors of recidivism in hebephilic sex offenders as in pedophilic ones: there are fairly large numbers of people over the age of 18 who look as if they had only recently reached puberty, but very few 18+ people who look completely prepubescent, which means the visual desire hypothesis predicts a much greater difference in prediction accuracy. The small difference in accuracy that is observed falls within the range of predictions of the hypothesis that the correlation between recidivism and erection is an artifact of offenders who train erection control being better at hiding crimes, as a slight amount of visual age confusion during such training is expected in people accustomed to different fashion styles, ethnicities and, in some "masculine" or "feminine" occupations, average hormone levels.

The hypothesis can be generalized: whenever a test of an involuntary response is used as a condition for parole after severe crimes, and knowledge of one's own response does not require very expensive, inaccessible equipment, response control will be trained specifically by skilled hiders of crime, creating an illusion that the test works as a recidivism predictor. This includes tests that are as of yet hypothetical. For example, suppose measurement of heart rate during exposure to stimuli that psychopaths are said to be resilient to (such as foul odors or imagery of mutilated faces) were instated as a condition for parole of convicted murderers in jurisdictions where the sentence for murder is less than a life term (at least for a first offense). The hypothesis then predicts that selective training of heart rate control (which is documented from certain types of meditation) by murderers who train to conceal their crimes would lead to those best suited to hide future crimes passing the test as "non-psychopathic", giving the test a false appearance of successfully predicting recidivism in murder. As long as brain scans that could be used to observe one's own brain waves remain very expensive and accessible only to a few, the hypothesis predicts that heart rate tests would appear to be a stronger predictor of recidivism than brain scans.
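
A sketch with made-up numbers of how such a test can look predictive while measuring nothing but concealment skill: everyone reoffends at the same rate, but those who train response control both pass the test and evade detection.

    # Same reoffense rate for all; trainers pass the test AND hide crimes
    # better, so "passed the test" correlates with fewer reconvictions.
    import random
    random.seed(2)
    N = 10_000
    REOFFENSE_RATE = 0.4                     # identical for everyone
    stats = {True: [0, 0], False: [0, 0]}    # passed test -> [people, reconvicted]
    for _ in range(N):
        trains = random.random() < 0.3       # trained control of the response
        passes = trains or random.random() < 0.1
        reoffends = random.random() < REOFFENSE_RATE
        p_caught = 0.2 if trains else 0.7    # trainers are better at hiding crimes
        reconvicted = reoffends and random.random() < p_caught
        stats[passes][0] += 1
        stats[passes][1] += reconvicted
    for passed, (people, rec) in stats.items():
        label = "passed" if passed else "failed"
        print(f"{label} the test: {rec / people:.1%} reconvicted")

Those who pass show far fewer reconvictions than those who fail, even though reoffending is identical by construction: the test "predicts" only who gets caught.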

Psychology's incompatible claims about gender in empathy and crime

It is often claimed that higher conviction statistics for males than for females "confirm" that men are biologically more aggressive than women. Psychology claims that this is linked to women having the caring role and men having the fighting and protective role. However, psychology also claims that empathy automatically leads to aggression towards whatever threatens the ones you feel empathy with. That claim is incompatible with the claimed role division between caring and fighting above. This also ties into psychology's contradictory claims about oxytocin. If oxytocin leads to more ingroup empathy but also more aggression towards the outgroup, and women have more oxytocin than men, why would men engage in more violence against other groups than women do?

As with all incompatibilities, the fact that one claim is wrong does not prove that the other claim is correct. Both claims can be wrong.

Willingness to take conditions for parole as an error source in treatments

Studies on treatments that are used as a condition for parole often misinterpret lower recidivism in treated criminals compared to nonvolunteers as evidence that the treatment works. They ignore the error source that willingness to undergo certain procedures for parole is a proxy for strong motivation to be free: those willing to go to great lengths to get parole are likely to be highly motivated to avoid going to prison again. For example, the observation that reoffending is dramatically lower in castrated male sex offenders than in those who do not accept castration is explainable as a result of the extreme motivation for freedom that is required, in almost all cases, to be willing to give up one's testicles for parole.

It is also not certain that these paroled offenders stop committing crimes; they may simply train better techniques for hiding them.

Wanting drastic procedures - an error source in voluntary precrime statistics

It is also possible for people who want to undergo restricted medical procedures that are granted to some categories of criminals to malinger an impulse to commit such crimes in order to get access to the procedure. For example, some men who want to be castrated for completely different reasons can malinger paraphilias and claim extreme difficulty resisting their claimed impulses to commit sex crimes. While the consequences of having such psychiatric records make it very unlikely that a health freak who thinks castration is good for health but does not believe in an afterlife would practice such malingering, people who do believe in an afterlife have a completely different premise of consequences and may be willing to mess up their current lives if they think it can improve the afterlife. This makes religious faiths that claim that castration promotes spiritual enlightenment a likely source of such malingering men. The hypothesis is falsifiable: background checks should reveal an overrepresentation of pro-castration spiritual beliefs and an underrepresentation of naturalists and atheists among men who apply for castration by claiming to be paraphilic.
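
The prediction could in principle be checked with an ordinary contingency test. The counts below are entirely hypothetical; only the procedure is the point.

    # Hypothetical counts: pro-castration spiritual belief vs. not, among men
    # applying for castration via paraphilia claims and a comparison group.
    from scipy.stats import fisher_exact
    applicants = [30, 70]     # [pro-castration belief, no such belief]
    comparison = [5, 95]
    odds_ratio, p_value = fisher_exact([applicants, comparison],
                                       alternative="greater")
    print(f"odds ratio = {odds_ratio:.1f}, one-sided p = {p_value:.2g}")

A significant overrepresentation would support the malingering hypothesis; its absence in real background-check data would count against it, which is what makes the claim falsifiable.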

Depending on how much suffering they are willing to endure for their beliefs, some may be willing to get sentenced and go to prison regardless of whether they are willing to commit abuse or not. It is possible that some claim responsibility either for real crimes that others have committed or for alleged crimes such as those planted as false memories by psychoanalysts. Given that people with strong spiritual beliefs often train meditative control of responses that are usually considered involuntary, these men may fake phallometric responses. This may explain why a high percentage of sex offenders appear to respond primarily or exclusively to inappropriate stimuli, without contradicting the hypothesis that the appearance of most men's genital response being category-specific and concordant is a result of volunteering bias. Maybe it is time to investigate the sources of the money received by the psychoanalysts who planted false sex abuse memories. Could the money have come from spiritualist groups who promote castration and want crimes to take responsibility for?

That means that there are incentives for some men to claim to have uncontrollable impulses to commit sexual crimes even if they do not. Both the fact that removal of the ovaries is not promoted by anywhere near as many religious groups as castration of men is, and the fact that ovariectomy is not offered as a treatment to female sex offenders, create a difference in sociological motives for religiously motivated malingering. This gives a non-biologistic explanation of why inappropriate sexual impulses appear to be not only more common but also harder to control in men than in women. After all, in the nonhuman animal species in which castration makes males cease their sexual behavior, removal of the ovaries also halts female sexual behavior, so why claim that evolutionary continuity would transfer such an effect to human males but not to human females?

What is a culture anyway?

Arguing that a behavior that is prevalent in some cultures must be biological pathology in people from a culture that condemns it is flawed reasoning. It assumes cultures to be unified blocks at the same time as it assumes them to be composed of miscellaneous ideas that are not linked by logical conclusions from one premise. In the case of miscellaneous ideas, it is possible for people to pick up ideas from all over the place, including the Internet. Assuming cultures to be unified blocks is an especially ridiculous self-contradiction when it comes from people who claim to oppose xenophobic assumptions of fixed and unchanging cultures. Family + school ≠ everything that is non-innate.

An idea or observation that one has randomly encountered at one point may inspire possible explanations of later observations, or reveal contradictions in claims that one hears. This makes cultural influence intractable in chaos theory's sense. Such random, intractable influences may lead to conclusions that are not shared by others (which would be classified as "delusional" by psychiatrists using the same mainstream simplistic definition of "culture" that is indistinguishable in content from that used in xenophobic definitions of "culture") and to novel moral values (which would be classified as "sociopathy" or worse by the same simplistic misconception about culture), without any "innate pathology" or pre-set "agenda". The claim that a belief that is "mentally normal" in someone from one "culture" is "delusional" in a person from another "culture" is likewise unscientific cultural essentialism that assumes cultures to be isolated blocks and denies the importance of individual chance encounters with ideas that cannot be boxed into a "same or different culture" dichotomy.

In the case of logical conclusions from one premise, the most controversial conclusions need not be the most important ones for the person who promotes them. They may simply come with the logical package. The right to apply premises consistently for the sake of consistency is part of the freedom of expression that is needed for the existence of science.

Do cultural universals prove anything?

Even if a behavior exists in all cultures that have been studied by anthropologists, that still does not prove hardwiring. It is possible for practical inventions to be created independently by different people to solve common practical problems. For example, pyramids are just buildings with wide bases and narrow tops, which is a simple way to build something big and stable with low technology.

In the case of arbitrary and even stupid behaviors, it is important to remember not only that all cultures that have been studied have been in contact with at least the anthropologists, but also that anthropologists are slowed down by ethics. The latter means that it is very likely that some others arrived before the anthropologists. Since "isolated" tribes are surrounded by culturally globalized people who physically look like the "isolated" tribe, people who have never seen anyone who looks like the anthropologists may still be culturally influenced from the outside. This may give an illusion of "universality" in stupid social behaviors that make no survival or evolutionary sense.

Ape/human difference, civilization creating and the myth of altruism

Some psychologists, including Tomasello, claim that humans have a unique sense of "fair" distribution of the fruits of cooperation. These psychologists claim that this allowed humans to create civilizations, while apes lack the allegedly necessary sense of fairness. However, the history of human civilizations shows that early civilization was not built on equal sharing between those who cooperated, but on coercion forced onto people by rulers, such as forced labor (in some cases slavery proper, in which people were traded as property; in other cases compulsory labor similar to military drafts, though not restricted to fighting but also including things such as building; both generally left the richest people exempt). If the difference in the ability to create civilizations between humans and other apes were due to humans having a sense of fairness, apes would have created more stable civilizations, while humans would have been more barbaric due to riots demanding an equal share.

Ice Age Homo sapiens built no civilizations, so the lack of Neanderthal civilization proves nothing

It is sometimes claimed that since Homo sapiens has created civilizations and Neanderthals did not, Homo sapiens must have been inherently more social than Neanderthals. That is a flawed claim, however. First, assuming that one observation proves one particular explanation is a case of the verificationist fallacy. Also, during the Ice Age when Neanderthals lived, Homo sapiens did not build civilizations either. If climate change that improved the possibilities of agriculture was important for the formation of civilization, then the lack of civilizations built by a human species that died out before that climate change has no relevance to the question of innate ability to build civilizations.

Debunking arguments from shared poverty

It is sometimes claimed that since many people in poor countries never commit the kinds of crimes that are associated with poor ghetto areas in the West, there "must" be something wrong with the people in poor Western suburbs themselves. However, that ignores the fact that a "good" society often does bad things to some people. For an example relevant in this case, police and authorities knocking down "illegal" buildings that people live in is an environmental stressor that makes life worse for the people in question than it would have been without such stress, even at the same income level. Suddenly being without a roof over one's head forces poor people in the West to look for somewhere to sleep (and adds the extra economic stress of replacing items lost to the police), while people in poor countries could have used that time to work for a bit of money or food instead.

Absurd criteria distinctions in psychiatric diagnoses

There are examples of psychiatry making arbitrary distinctions in its definitions of some mental illnesses. Often these distinctions are binary and ignore the underlying variability.

Gender and sex

Some "mental illness" are defined in one way for men and in another way for women in the official manual DSM-V. That distinction ignores the evidence against sex being binary, such as the fact that hardly any brain is "all male" or "all female" and that some brains are impossible to sex determine. The fact that there is a 90% individual overlap in the average brain structural differences between men and women is comparable to the 85% individual overlap of DNA average differences between people from different parts of the world, there is an even stronger case against the concept of gendered brains than against the concept of human races. The DSM-V, being decided by votes, ignore this and it is no surprise that a manual that define paraphilias differently depending on gender also consider transsexuality to be a mental illness (transphobia is an inevitable result of believing that individual combinations of both characteristics that are considered normal for men and characteristics considered normal for women can be pathological merely for being combined in the same brain). And penis fixation is not the only example, there are others too.

In some cases, women have been labelled "psychotic" merely for drawing conclusions that follow from things that psychiatry officially claims about gender differences and men. For example, women who have listened to the claim that testosterone makes men more violent than women, and who have then been in accidents and lost blood, have been declared "psychotic" for requesting donor blood only from women, out of fear that donated blood from men would testosterone-poison them and make them violent. Another example: psychiatry claims that only women have evolved the ability to recognize unappealing sexual situations as sex acts, as a preparation for being raped, while men can only recognize an act as sexual if they are interested in taking part in it. As a result, thinking about the fact that police officers are capable of investigating sexual crimes leads some women who have been exposed to those claims to fear sex crimes from police officers, such as policemen molesting their offspring. And psychiatry deems such women "psychotic" even though they only draw the conclusions from what psychiatry teaches!

Is the speed of test performance relevant for "innateness"?

It is often claimed that when test subjects perform a task faster than should be expected from a psychological straw-man depiction of conscious intelligence, that is "evidence" of an innate domain-specific brain mechanism for that particular task. That claim, however, assumes that consciousness is a separate "module" and ignores the possibility that it is a connective function that goes fast when connectivity across the brain works fast. It is possible for brain-wide communication to speed up or slow down.

It is not even certain that there must be an innate trigger. Brains can change, and there is no evolutionary reason to assume that learned responses are slower than innate responses. On the contrary, there have been many cases of old types of carnivores dying out in Africa and being replaced by others. That lent evolutionary advantages to the hominins that could modify their response to danger to fit new threats without slowing their responses down. Stone Age humans also survived among carnivores unlike anything that ever existed in Africa, such as Australian Aborigines surviving in Australia's fauna. The fact that there are often large differences in hunting techniques between different types of carnivores, so that a behavior that protects from some carnivorous animals increases the risk of being killed by others, shows the importance of this adaptability.

What is the point in IQ tests?

It is often claimed that IQ tests measure a general intellectual ability in humans. The IQ doctrine cites evolution as an "argument" for intelligence being on a gradual scale, at the same time as it claims IQ to be applicable to humans only and not to other animals! The claim that IQ applies only to humans, with a "species"-based demarcation, contradicts the claim that its intelligence gradualism is supported by evolution. Proper measures such as neuron count, number of synapses per neuron, signal efficiency in the synapses and, in the case of large brains, long-distance connectivity across the entire brain, can and do apply to brains in general without species restrictions.

Missing the ability to disprove theories

The scores in IQ tests are de facto set by how well the test subject's answers agree with the answers written by the creators of the test. This means that one gets no points for proving that the official answers are wrong. So IQ tests do not value the ability to falsify theories, the very ability that allows science to progress and technological civilizations to be created. Beings that are inherently incapable of doing science and cannot create technological civilizations can therefore pass as geniuses on IQ tests!

The ability to criticize theories referred to here should not be confused with creativity as psychology understands it. Psychology claims that critical thinking impairs imagination and creativity. That is an effect of psychology's tradition of confusing criticism of factual errors with conventionalist attacks on anything new. True criticism of factual errors goes by the content, not by whether the claim being criticized is new or conventional. If you measure the total number of ideas, criticism does reduce the number. But it increases the quality of the remaining ideas: a few ideas that work instead of many ideas that do not. There are no authority-based shortcuts to this, no heuristics that can do the job; it takes criticism of the content.

Is the idiocracy hypothesis idiotic?

The idiocracy hypothesis (sometimes called the idiocracy theory) holds that human intelligence has been declining through the civilized millennia since agriculture was invented, due to stupid individuals having more children. The hypothesis states that stupid individuals in larger groups can use techniques that they could not have invented themselves. That makes the hypothesis incompatible with the claim that intelligence evolved for life in large groups. If the latter claim were correct, the larger groups enabled by agriculture would select for more, not less, intelligence.

Even without claiming that intelligence evolved for big groups, is the imitation of techniques really capable of selecting against intelligence? Simply imitating, or taking others at their word, grants no ability to distinguish useful information from disinformation, nor does it allow adapting when changes in the environment make old knowledge useless. Climate change is known to require at least as much behavioral change for survival in farmers as in hunter-gatherers. Historically, farming civilizations that modified their farming to fit climate change survived, while those that refused to change collapsed. A high percentage of degenerate conventionalist minds in a population would increase the risk of the ruler being in that category, and thus the risk that his or her decisions would lead the entire civilization into ruin when the climate does change.

The idiocracy model also holds that intelligent people are less likely to procreate, either by being more picky about mates or by being more likely to use contraception. Against the claim that intelligent people are more picky about mates: there are very small-brained animals, even insects, that are very picky about mates, and there are intelligent people who are promiscuous. Against the claim that intelligent people are more likely to use contraceptives: it has not been true for the greater part of the time that agriculture has existed. Contraception was always out of the question in traditional farming communities, and thus cannot explain why early sedentary farmers and other pre-industrial civilized people would have had smaller brains than hunter-gatherers.

The flawed assumption that time is the limiting factor

IQ tests are built upon the assumption that it is time that is the limiting factor, that a less capable brain only needs more time than a more capable brain but eventually gets the same tasks done. This ignores the fact that different brains have different maximum precision of their distinctions. Ivan Pavlov did not, contrary to a common myth, use simple bells. He used more accurate equipment such as metronomes that could be set to tick at precise rates. What he found was that while dogs that had been given food when they heard the metronome tick salivated no matter the rate at which it ticked, human subjects salivated only when it ticked at the same rate as when they had been given food. Also, 8-9 year olds were somewhat less precise in their distinctions than adult humans but still much more precise than dogs.

Given that the precision of the distinctions is a limiting factor, it is likely that there are conceptual distinctions that some brains can make while other brains cannot, regardless of how much time they have. Such distinctions can be tested by presenting subjects with many examples to determine whether or not their classifications differ from random chance. It is similar to testing for color blindness by showing a person many red and green objects to see whether the person picks the right ones at a rate better than random guessing, which demonstrates color vision. Only in this case the limiting factor is in the brain instead, and the distinctions are about abstract concepts and not colors. IQ tests do not do that, so they miss it.

Of course it would be necessary to rule out any use of outer authority. Just as a colorblind person can use the words red and green about objects that others say are red in some cases and green in others, so can a person with retardation use the terms reductio ad absurdum and straw man about phenomena that others call reductio ad absurdums in some cases and straw men in others. Non-intelligent entities could pass the tests if they used mode of publication (such as peer review versus non-peer review) or conclusions (including political classifications thereof) as proxies for the distinction. What it takes is an independent distinction of what follows and what does not follow from the premise in question. What should be tested is the ability to make the distinctions independently. It is also possible to test whether such a distinction emerges at a critical threshold of an underlying Pavlovian ability to distinguish "simple" cues such as ticking rates.

The concept that the distinctions a brain can make are a limiting factor independent of time can be compared to the fact that eyes have limits to how small an object they can see without a microscope. You cannot see a bacterium with your naked eye no matter how much time you have to look at it. If observation time is not the limiting factor in how small things you can see with your eyes, why would time to think be the limiting factor in how precise distinctions your brain can make? Is anyone seriously claiming that a chimpanzee can distinguish a reductio ad absurdum from a straw man and only needs more time than a human to do so? Note that a ratio of correct to wrong classifications that can be explained by random chance amounts to a failed test.
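
A minimal sketch of how such a distinction test could be scored, assuming two alternatives per item so that chance performance is 50%; the counts are hypothetical.

    # Does the subject's classification rate beat random guessing?
    from scipy.stats import binomtest
    n_items, n_correct = 60, 44               # hypothetical session results
    result = binomtest(n_correct, n_items, p=0.5, alternative="greater")
    print(f"one-sided p = {result.pvalue:.4f}")
    # A large p (consistent with chance) would amount to a failed test.

This is exactly the color-blindness logic above transferred to abstract concepts: the score is not "how fast" but "distinguishable from guessing or not".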

Flock validation bias in "prediction" of scores in one IQ test by another

It is often claimed that because scores on one official IQ test closely predict scores on another official IQ test, this "proves" that IQ is a valid measure of intelligence. That claim, however, ignores the problem of bias by flock validation. If a new test is evaluated by whether or not its results agree with those of other tests, there is a systematic bias that clusters tests giving similar results. There would be the same "validation" if the initial official tests were lists of questions about football victories in the 1980s; the comparisons would then classify other tests that also asked questions about football victories in the 1980s as "valid measures of intellectual ability".
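
A sketch with simulated data: two "tests" that both draw on the same irrelevant trait correlate strongly with each other, so mutual prediction alone validates nothing.

    # Two tests both loaded on the same irrelevant knowledge (say, 1980s
    # football trivia): they predict each other's scores well without
    # either of them measuring intelligence.
    import numpy as np
    rng = np.random.default_rng(0)
    trivia = rng.normal(size=5000)                      # shared irrelevant trait
    test_a = trivia + rng.normal(scale=0.5, size=5000)  # "official" test
    test_b = trivia + rng.normal(scale=0.5, size=5000)  # "validated" new test
    print(f"correlation = {np.corrcoef(test_a, test_b)[0, 1]:.2f}")  # ~0.8

High inter-test correlation only shows that the tests measure the same thing, whatever that thing is.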

Are IQ tests rigged to look gradual?

Many forms of retardation, such as Down's syndrome, are caused by chromosome defects. An extra chromosome or an added chunk of a chromosome is all or none, and much more likely than a large number of individual genes independently duplicating. The likelihood of the extra copy does not decrease gradually as the number of genes on it increases; the whole chunk is either there or not. Chromosome defects are thus major, discrete changes that do not fit into a continuous bell curve. So why do IQ tests appear to show a bell curve in IQ?

One possible explanation of an apparent bell curve of IQ is that there is an institutional tradition of assuming that intelligence "must" be gradual, causing questions to be added or removed when discontinuities become apparent. This allows only the tests that do show a bell curve to remain. It leads psychology to ignore not only obvious flaws in the gradualist model, such as chromosome defects, but also the possibility of critical thresholds for emergent phenomena.
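
A hedged sketch (population mix and defect effect purely hypothetical) of the first half of this explanation: a trait distribution with an all-or-none component is visibly not a bell curve.

 import random
 random.seed(2)
 # Hypothetical population: most trait values vary gradually, but an
 # all-or-none chromosome defect puts 3% of individuals at a fixed low value.
 def trait():
     return -4.0 if random.random() < 0.03 else random.gauss(0, 1)
 population = [trait() for _ in range(100000)]
 # Crude text histogram: the all-or-none subgroup shows up as a separate
 # lump, not as the tail of a continuous bell curve.
 for lo in range(-6, 4):
     count = sum(lo <= x < lo + 1 for x in population)
     print(f"{lo:+d}..{lo + 1:+d} {'#' * (count // 500)}")

Any item-selection rule that removes the very questions on which the lump separates cleanly from the rest would smear the lump into the main curve, which is the institutional-tradition effect hypothesized above.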

IQ tests do not measure the intelligence of people. IQ tests measure the stupidity of psychology's methodology, observing the results of its own flaws and not what intelligence actually is.

Pathologizing criticism

To assume that a person who criticizes a H/T must be "mentally ill" is an example of what Karl Popper referred to as immunization to criticism, which makes the claim non-falsifiable and therefore unscientific. An example of this is when psychologists, psychiatrists and the degenerate form of neurologists that "map areas" of brains (all three sharing the same fallacies) pathologize people for not feeling like they have free will, at the same time as they pathologize people for believing that free will is genuinely real. Not to mention all the shit they throw at people who point out that societies can exist without belief in free will!

And of course, the claim that people who disagree with a psychiatrist's label of "mental illness" are "psychotic" constitutes immunization to criticism and is therefore unscientific. That psychiatry has rules and procedures is no help: where is the falsifiability of said rules? What are the falsifiable predictions of the claim that "behavior X is normal while behavior A is pathological"?

Assuming motives as totalitarian infalsifiability

The same goes for assuming motives, regardless of whether the alleged motives for a view are labeled "normal" or pathological. The distinction becomes especially irrelevant when the "normal" categorization still involves some form of ostracism with similarities to pathologization. For criticism to be possible, the right to consistently apply premises for the sake of consistency is all-important. Freedom of expression means the right to speak for all opinions that follow logically from consistent application of the premise. The act of directly or indirectly demanding inconsistency in any way, including isolating specific opinions and assuming a specific psychological or ideological agenda behind them (instead of correctly taking consistency for what it is), is totally unacceptable oppression of thought.

One of the most common types of such totalitarian assumptions in Sweden is to assume that people who criticize animal protection inspectors have themselves been convicted of cruelty to animals and banned from owning animals. Apart from being an obviously false generalization (one of many other grounds for criticizing animal protection inspection is the philosophical notion that humans cannot know how much suffering a speechless animal is feeling, and that putting an animal to death "humanely" can end a life that the animal finds worth living and that only superficially appears miserable to some subjective outside observers), it also constitutes immunization to criticism by assuming "agendas" behind criticism. Such assumptions of "motives" can immunize any claim to criticism, making it impossible to move from cultural whim to something better. In fact, immunization to criticism makes it impossible to gain knowledge about what is arbitrary cultural whim and what has a deeper meaning. Neither claiming "I was born that way" nor claiming "I was raised that way in my culture" can distinguish what does good from what does harm. Universal criticism is necessary to know anything. No exceptions.

For example, imagine that person A writes a H/T article stating that "the H/T that the permanent brain damage related to use of narcotics is decoupled from the high effect itself predicts that purified narcotics without contaminant molecules should not cause brain damage". Imagine, then, that person B challenges that reductio ad absurdum by saying that "it is possible for some daughter molecules from the breakdown of the original narcotic molecule to cause permanent brain damage, even if the exact molecule that gives the high effect causes no permanent brain damage in itself". Imagine, then, that person A invokes "cognitive bias" theory and claims that person B is a drug addict (without any empirical evidence of person B using narcotics) and therefore attacks a theory of how drugs damage the brain out of a desire to legalize narcotics (even though person B has never expressed the view that narcotics should be legalized). Person A has, in that case, committed the pseudoscientific act of immunization to criticism.

Even when a person expresses the view that something should be legalized and/or is morally acceptable, that is still not a valid ground for assuming that the person wants to do the act himself/herself. Much of this is a matter of general freedom of expression: being pathologized and demonized is a violation of one's freedom of expression even if one is not formally punished.

When I read a joke article that claimed that Peter Singer was a zoophile merely because he considers bestiality to be morally acceptable, my reaction was to be really angry at the joke for violating freedom of expression by means of psychoanalytical assumptions. I reacted that way even though I disagree with Peter Singer's view that mirror test ability is sufficient for the capacity to consent (I am a fallibilist-pansophontist, an intelligentism stressing the importance of the ability to do reductio ad absurdum in order to correct what others fallibly assume about one's welfare, which is very different from Peter Singer's utilitarianism and essentially infallibilist view of empathy). Imagine that person C argues that the large differences in age of consent between countries, combined with the fact that changing a country's laws is more difficult than emigrating, mean that there is no reason to assume that people who want to radically decrease the age of consent are pedophiles. What person C does is a falsifiable and therefore scientific sociological analysis. To assume any kind of psychological or ideological "agenda" behind person C's analysis is therefore antiscientific.

Even false claims may not be psychoanalyzed

Whether or not a claim is true is decoupled from the principle that assuming motivations is wrong. For example, there are anti-capitalists and anti-industrialists who think global warming is a hoax created to divert attention from more serious forms of pollution. These people are an example of global warming denial that is not related to capitalist interests. The fact that spectroscopic analysis shows that CO2 and other greenhouse gases, which increase due to human activities, do trap IR radiation and warm Earth is a valid criticism of global warming denial, but it is not a valid argument for assuming agendas behind global warming denial. There is a qualitative difference between criticism and "psychoanalysis". Criticism is scientific; to assume motives is unscientific.

Cognitive bias theories do the job of conspiracy theories

In fact, a cognitive bias theory can do the denial job of a conspiracy theory without the conspiracy itself. It is possible, within cognitive bias nonsense, to deny global warming by claiming that all evidence for it is the result of humans being biologically psychologically biased to believe in global warming. Since that constitutes immunization to criticism it is not scientific, but it is no worse than any other psychological motive assumption. The contradiction in being human while alleging that the viewpoint one rejects is panhuman is the same too. It would also be possible within cognitive bias nonsense to deny the Holocaust by alleging a panhuman cognitive bias to believe in the Holocaust, and to say that this is not anti-Semitic because it involves no Jewish conspiracy but only a cognitive bias shared by all humans. The same cognitive bias psychology can also be used to deny the moon landing without invoking a deliberate hoax, by claiming that all humans are biologically hardwired to believe that the moon landing was real. If psychologists can claim that all humans are hardwired to believe in God, nothing stops them from also claiming that all people are wired to believe in the moon landing. From a scientific (not cognitive bias psychology) point of view there is a distinction in that there is evidence for the moon landing but no evidence for the existence of God. However, since cognitive bias psychology can make up claims of cognitive bias compromising any evidence, it is immunized to any distinctions based on evidence.

I once read a debate on an Internet forum in which one user (whom I call user D) wrote that it was wrong to criticize people who blindly obey authority at work because they "cannot help" their stupidity. Another user (whom I call user E) wrote that anyone capable of questioning has the right to do so, and does not have to tiptoe around blindly obedient stupidity. However, user E also assumed that user D had been attacked for his or her (alleged) blindly obedient stupidity at work. I posted a reply in which I wrote that user E had many valid points (to which I added the dangers of blind and stupid obedience, i.e. the banality of evil) and that I sharply disagreed with user D's viewpoints, but also that it was wrong of user E to assume motives behind user D's views. Again, the distinction between acceptable criticism and unacceptable "psychoanalysis". And again, I rejected the assumption of psychological motives even when it was used by someone I otherwise agreed with against someone I sharply disagreed with. The consistency of principle.

Since pathologization (and demonization) of criticism is non-falsifiable, it can be used against literally anything. Therefore, there is no such thing as rejecting pathologization of criticism "with reasonable exceptions". You either categorically reject all assumptions of psychological motives behind opinions, or you effectively support a psychological totalitarian dictatorship that can arbitrarily suppress any view it pleases. A society where psychological analysis and assumptions deter people from expressing their views is a society without freedom of expression: psychological totalitarianism. All assumptions of psychological motives behind views therefore fall under the category of intolerable intolerance. There are no exceptions. Claiming that freedom of expression includes your right to assume that someone who wants to legalize infanticidal cannibalism wants to practice it him/herself is like claiming that freedom of expression includes a Christian priest's right to say that all Christians who do not want to kill all Muslims are possessed by demons and will go to Hell. Criticizing views is okay, though. The difference between mild criticism and extremely harsh criticism is merely quantitative, non-absolute. The difference between criticism and assuming psychological motives is a qualitative, absolute difference.

The poverty of postmodernism

Correction of fallible assumptions is the universal key of intelligence. It is relevant in many ways. Since you cannot know a priori whether something you do for an individual's own good actually harms that individual (universal fallibility), the ability to correct what others assume about one's welfare is morally relevant too. Universal fallibility means this form of antistupidism cannot be compared to arbitrary lists such as IQ tests, which commit "the Godbook fallacy" by assuming a list of answers to be the gold standard of intelligence. Since identity is fallible, one can make false assumptions about what form of being one is oneself; fallibility is universal to all entities and not unique to human flesh and blood. Referring to the alleged infallibility of, say, a deity also fails. One can never know whether one is omniscient, even if omniscience existed!

It is time to de-postmodernistify Earth somewhat like the Allies de-Nazified Germany, but more efficiently. This time, it is important not to leave any seeds of postmodernistoid oppression in existence. It was a big mistake to let essentially the same empathy scale based on belief in a specialized "social module" that underpinned Nazi racism persist under the cosmetic change from Nordic cold to African savanna, and from explicit racial profiling to perpetuating indirect discrimination by word censorship that calls anyone who says that it is discrimination a racist. Distorting the action against "just obeying orders" from genuine prevention by general anti-bureaucracy into a non-preventive doctrine of viewing punishment as a goal in itself was also a huge mistake.

It is very dangerous to feel offended by comparisons of one's actions to other examples (e.g. to Medieval torture "for the heretic's own soul's good") or by comparisons of other things to what one has experienced (e.g. "you have not experienced the particular thing I have experienced, so do not compare x to it"), or to "respect" such comparophobia, as it leads to similarity-blindness and repetition of history under cosmetic changes. It is important to crack down on all continued existence of such behavior (not punishing past behavior just for the sake of punishment) for its consequences, not for its intentions. Democracy is an illusion as long as there is fear of being psychologically analyzed.

Issues with punishment

Punishment selecting against capacity of intent

If our transitional ape/human ancestors had specifically punished individuals with whatever "human" characteristics are said to be the basis for moral and criminal responsibility, while not punishing more apelike individuals for the same actions, that would have led to said "human" characteristics being selectively bred out of the population and only the nonhuman characteristics being passed on. Survival of the impune. Or, since degrees of punishment can select for lower degrees of intentional ability, survival of the impunest. It does not have to be black and white for it to work.

Reward hacks - being forced to pretend stupidity

It is also possible that specific punishment of intentional actions forces people to pretend that their actions were not intentional, which keeps harmful behaviors going. This is an application in humans of what is known to Artificial Intelligence experimentalists as a reward hack. It is an example (but, per the principle of not assuming motives, certainly not a necessary one) of the importance of not assuming that someone who criticizes claims on the lines of "it is good that person X was punished for that crime" or "I hate that person X got away with that crime" in any way accepts the action itself. There must be freedom to express general criticism of punishment without any risk of being claimed to "defend" the actions one criticizes punishment of. No matter the crime and no matter the context, NEVER assume that someone who criticizes punishment "defends" the action! EVER!

This applies to everything from everyday questions on the lines of "did you do that on purpose or was it a mistake?" to criminal courts asking "sane or insane?". What society's specific condemnation of intentional "evil" does is create a stupidity norm. Superficially "pro-intelligent" values in the choice of words, such as feeling offended by words like "stupid" or "retard", refer to just that: choice of words. Since society still makes people feel "offended" (or perhaps scared of punishment or social ostracism) by the notion that they are fully capable of changing their behavior, the semantic veneer does not change the fact that society is stupidonormative underneath. With such stupidity norms around, apparently stupid behavior requires no biological stupidity to be explained. Claims such as people being made stupid by gut bacteria, or claims of humans being "incompletely evolved" that share ideas with the notion of young people being stupid for biological reasons, are redundant and should be shaved away by Occam's razor. It is time to fight stupidonormativity instead!

Does the difference between free will and the ability to change behavior matter?

There are pointmissing claims that "we do not have free will but we still have mental capabilities that make us morally responsible for our actions". The missed point is that any characteristics that make their carriers into subjects of punishment, where individuals without the characteristics would have gone unpunished, are selected against. Whether or not the characteristics constitute "free will", or even whether they are mental abilities at all, is irrelevant. A population of wolves in which punishment is aimed primarily or exclusively at green-eyed individuals would select against the genetic basis for green eyes.
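
A minimal haploid selection sketch (selection coefficient and starting frequency hypothetical) of how fast such punishment-driven negative selection works:

 def next_frequency(p, s):
     """One generation of selection against a trait whose carriers have fitness 1 - s."""
     return p * (1 - s) / (p * (1 - s) + (1 - p))
 p = 0.9   # hypothetical: the punished characteristic starts out common
 s = 0.05  # hypothetical: a 5% fitness cost from selective punishment
 for generation in range(301):
     if generation % 50 == 0:
         print(f"generation {generation}: frequency {p:.3f}")
     p = next_frequency(p, s)
 # The characteristic collapses towards elimination within a few hundred
 # generations - an evolutionary eyeblink.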

Claiming that the criticism above "assumes black and white differences" and denying that it applies to gradual change is also a pointmisser. Negative selection is negative selection, regardless of whether the factor structure is strictly gradualist, strictly all-or-none, a critical threshold at an objectively fixed amount of one or more gradual underlying factors, or an arbitrarily constructed cutoff on one or more gradual characteristics. If there were many genes involved in different degrees of green eye color among wolves, a population that selectively punished individuals with greener eyes would evolve towards less green eyes than other populations. So much for the combined claims that humans have a sense that specifically individuals with developed human mental characteristics are punishable and/or condemnable for their actions AND that humans have a higher degree of said characteristics than other biological organisms.

The claim that the negative selection "assumes that Homo sapiens already had evolved" because "only Homo sapiens recognizes the difference between human behavior and non-human behavior" ignores the problem of evolutionary transition. According to Darwinian and neo-Darwinian evolution, species do not spring into existence as sharply demarcated entities out of nothing. In any gradual transition from archaic Homo to Homo sapiens, any specific characteristic would have to exist to some extent before the population was "fully modern" by later standards. And the principle of negative selection applies as much to flux populations that do not undeniably belong to either "species" as to anyone and anything else. In strict gradualism, some-extentish punishment of higher-degree awareness must have existed before full modernity. If "human mental ability" appeared at a critical threshold as an emergent phenomenon in the brain, it would have been a disaster for those individuals and for the underlying genes if they punished each other but spared individuals below the threshold. If such a distinction formed as a result of a population being dominated by sophont individuals, non-sapiens genes from later interbreeding with other Homo types such as Neanderthals and Denisovans would have been positively selected by impunity (adaptive introgression), wiping out intelligence.

Different populations and gene flow

It has been claimed that the possibility of gene flow between populations in which selection worked in different directions could invalidate the principles of negative selection. However, given that mixing would produce mixed individuals, and with that individual variation within the populations, it would give selection more raw material to select on locally. High degrees of interbreeding could temporarily produce a significant number of maladapted individuals, but either the cessation of interbreeding, the extinction of the other populations, or selection becoming unidirectional across the populations would lead to the maladaptations being eliminated within a few generations. Optimization would be helped by the diversity from earlier interbreeding giving natural selection much to work with.

In any case, even with significant ongoing interbreeding, there would be less of a characteristic in a population where it is negatively selected than in one where it is positively selected. So Homo sapiens cannot at the same time have been smarter than the other hominids they interbred with and have had a sense of punishing specifically intentional actions that the other hominids lacked. And if intention-punishment became "culturally universal", as psychology claims it is today, that would have eliminated higher intelligence everywhere, and fast, due to the high degree of genetic variation to draw from.
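
A hedged two-population sketch (all parameters hypothetical) of the point that ongoing gene flow does not equalize a trait that is selected in opposite directions:

 def step(p, s, m, p_other):
     """One generation: selection (carrier fitness 1 + s), then a fraction m
     of the gene pool is replaced by migrants from the other population."""
     p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
     return (1 - m) * p_sel + m * p_other
 p_favored, p_punished = 0.5, 0.5
 for _ in range(500):
     p_favored, p_punished = (step(p_favored, +0.10, 0.01, p_punished),
                              step(p_punished, -0.10, 0.01, p_favored))
 print(p_favored)   # settles high: the trait stays common where favored
 print(p_punished)  # settles low: and stays rare where punished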

Can punishment without death or decreased reproduction be generalized?

It has been claimed that since not all punishments cause death or decreased reproduction, evolution would not select for impunity. However, there are also many punishments that do cause death or decreased procreation. Examples of punishments that decrease reproduction include gender apartheid in prisons and sexual risk orders from courts. In some jurisdictions, the category may also include sterilization. Mobs driven by a culture that views consciously chosen "cruelty" as worse than unintended bad consequences may select against intelligence outside the formal criminal punishment system. In the case of ape-to-human transitional ancestors, not only did getting killed by a mob cause death, but "milder" punishment such as loss of rank caused both decreased access to food and water and decreased mating opportunities. Since the phenomenon of more severe punishment for intentional actions has, as far as we know, never been confined exclusively to punishments with no effect on either survival or reproduction (and is extremely unlikely ever to have been, given that the unintended consequences of punishment are decoupled from the intentions behind punishment), the conclusion that selective punishment of premeditated acts selects against the ability to premeditate one's actions holds.

Even if a system in which the severity difference between punishment for intentionally "cruel" actions and for unintended causation of negative effects had no effect on survival or reproduction were created today or in the future, it would not change the facts about ape-to-human evolution in the past (though it could possibly prevent present or future biological degeneration into non-sapience). By retaining a selective condemnation of intentional actions and punishment thereof, it also would not remedy the societal pressure to pretend lack of intention, ignorance, stupidity, inability and so on. It has been claimed that the threat of punishment of intentional bad actions could be good by making people more careful about what they do, but that claim overlooks the fundamental flaw in specifically judging intention. If "carefulness" is driven by fear of being punished or condemned for having done something purposefully and knowingly, it becomes a very unsound kind of "care", full of pressure to pretend stupidity and to have one's actions classified as "unintentional". Total eradication of all selective condemnation of intentional actions is therefore necessary. Never let any examples of specific actions mislead you away from that; always remember the categorical distinction: to think that a phenomenon is bad and should stop is NOT the same thing as demanding punishment of those who intentionally did part of the phenomenon! On the contrary, the more you care about stopping a phenomenon, the more you should oppose something as counterproductive as specific punishment of intentional choice!

Evolution and the existence of science

The entire assumption that rational arguments are afterconstructs to justify preconceptions is incompatible with the existence of science, if the beings that are said to "work that way" are the same beings that are said to do science. It would be theoretically possible for one set of beings to work like cognitive bias psychology claims that humans work, while another set of beings capable of science did science. It would be possible for the latter to do experiments on the former, just as it is possible to use lab rats for scientific experiments even though rat brains are not capable of using the scientific method themselves.

How the beings that worked the way bias psychology claims humans work could possibly have evolved is another matter, though. A critical consciousness that evaluates past actions and interpretations of the world in a way that enhances the ability to learn from error beyond trial and error could well have evolutionary advantages that more than compensate for the higher nutrient expenditure in the brain. However, a "consciousness" for justifying preconceptions would not have these advantages. Instead, those justifications would stop simpler learning mechanisms from doing their trial-and-error job. It would be one nutrient-consuming mechanism with only one function: to prevent another nutrient-consuming mechanism from doing productive work, without reducing the latter's nutrient price. It would be two nutrient-consuming mechanisms getting no learning job done. A much, much smaller brain of simple reactions could have produced the same inflexible behavior at a much, much cheaper nutrient price.

If huge numbers of distinct psychiatric diagnoses were genuinely valid, that would imply that brains were made of many specialized mechanisms that could fail. That would mean that all brains, including "normal" brains, would lack the generality needed for science. All brains would then lack sophonthood. It is, however, only massive modularity that is incompatible with the existence of science, which makes the many-specific-diagnoses version of psychiatry irreconcilable with the existence of the scientific method. Science's existence does not rule out the existence of general retardation, general dementia and loss of sapience due to brain damage; it does not rule out the existence of pre-sapient kids; and it does not rule out evolution of sapient brains from pre-sapient beasts. The existence of science requires the existence of brains that have reached a general ability to rule out false assumptions in the present, as opposed to merely working in a specialized way stuck in preconceptions, but it does not require brains to always have been that way.

The importance of adaptability to change of the environment

Adaptation and migration are not mutually exclusive

It is sometimes claimed that just as many animals migrate when the environment changes today, so would our ancestors have migrated rather than adapted when the climate changed by tipping points and jerks between glacials and interglacials. However, there is no denying that some of these changes shrank the African savanna to much smaller sizes during its lows than during its highs. Those living in border zones that were changing into something else, be it jungle, woodland or desert, would face extremely severe competition if they tried to migrate to what was left of the savanna. Hominins adapted to other environments would face the same when the climate swung the other way. Since those in the expanding environment would take time to procreate and fill it, especially when some tribes wanted to stay where they were and blocked the way for those behind them, there would have been plenty of advantages for those adapting to a changing environment. This explains why humans adapted to many environments across the world rather than remaining confined to the savanna.

Nor are necessity of migration and necessity of flexibility mutually exclusive. Just because migration is a necessary condition for surviving a catastrophic climate change does not mean that it is a sufficient condition. If you migrate from an area that was turned into a total wasteland by abrupt climate change to an area where food and drinkable water can still be found, that does not mean that the new place you move to is a near-copy of what your old habitat used to be like. The differences can still be such that a radical change of behavior patterns is necessary for survival, such as water availability changing on a different seasonal cycle, the carnivores that can eat you using very different hunting techniques against which your old means of protection are worse than useless, and the food with nutrients that you need looking superficially like something that was lethal poison in your former environment.

It is sometimes said that since not all animals under those climate changes evolved bigger brains, the climate change cannot have caused bigger brains to evolve in humans either. That claim, however, misses the point that there are necessary but insufficient conditions for adapting to some environments. If there are already anatomical features that physically allow adaptation to a new environment and intelligence is the bottleneck for adaptation, there will be much greater potential for selection for intelligence than there would be if there were also many anatomical hurdles against adapting to that environment. These anatomical features, like climate change, are necessary but insufficient conditions for the evolution of the brain. For example, swine with their snouts adapted for digging are anatomically capable of adapting to more changes in the environment than they would have been without those snouts. And so swine evolved bigger brains during climate change while other related ungulates did not. It is just that the free hands that humans have allow for even more adaptations than a pig's digging snout does. Free hands were a necessary but insufficient condition for the evolution of human intelligence, as the pig's snout was a necessary but insufficient condition for the evolution of the pig's brain.

While much of the environment changed, there were some "pockets" of the old environment remaining. That explains why there are specialist species (many plants and animals) still around today. So while most hominids did adapt to a changing environment, some remained in refugial habitats and did not have to adapt. This may explain hominids with small brains in recent times, such as Homo naledi: they were refugial hominids.

Surviving sudden change - too fast to migrate, and water

When you are faced with sudden climate change, you often have to survive by quickly finding food and especially water in the changed environment. You do not always have time to migrate elsewhere (also, when the climate changes, old knowledge about seasonal migration cannot be used to know where to go, nor did prehistoric people have the Internet to check where their familiar habitat might have persisted). What is the evolutionary point of trying to reach a remnant of your known environment a 10 day walk away when you will die from dehydration in 3 or 4 days? Obviously the ability to come up with new ways to find the necessities of life in a changed environment, where the old knowledge would not predict them to be found, was an evolutionary advantage.

Even with our big and nutrient-hungry sapiens brains, dehydration will kill you faster than starvation. While big brains consume your nutrients faster, they do not dehydrate you faster. So in the face of abrupt environmental changes, the enhanced ability to find water very fast was worth a shorter deadline for finding food.

The importance of life and death flexibility

This also means that the claim that cognitive biases resist change specifically for behaviors that are important for survival is evolutionary nonsense. It is precisely for life and death decisions that abrupt climate change selects for adaptability. Behavioral changes that are not matters of life and death cannot evolutionarily compensate for the nutrient costs of bigger brains. Given that humans are not the only animals with some ability to adapt to changing environments, and that learning by trial and error can save your life in some environmental changes, the same selection for adaptability that makes intelligence useful would select against any misfiring intellect that justifies biases and thus prevents the more basic trial-and-error learning from doing its (already somewhat nutrient-hungry) job in a functioning way.

Minds capable of falsifying hypotheses in a scientific way (even without writing or any explicit formulation of the scientific method) are the most adaptable to change, animalistic trial-and-error learning is a mid rank, and the kind of brains cognitive bias psychology believes humans to have (if they existed) would be the bottom rank of survival value. Language and telling stories about past climates may offer an incremental advantage over simple trial and error and imitation if the climate change is cyclic and the old climate will come back later (that is already superior to chimpanzee groups losing their skills to hunt certain monkeys when those monkeys are too rare for new chimpanzees to see other chimpanzees hunt them). Later, the capacity for falsification of hypotheses by means of reductio ad absurdum offers yet another adaptability advantage over that, allowing our (by then fully human) ancestors to let their false beliefs die in their place instead of dying with them, and to adapt to unprecedented changes.

It has been claimed that since the emergence of larger brains happened slightly later than the climate changes and not during them, adaptability to environmental change is precluded from being the driving force in the evolution of the brain. However, there are multiple flaws in that "reasoning". One is that the direct effect of natural selection is the elimination of the least adapted; greater adaptation comes in later generations when the genes are recombined. Also, such periods of intense natural selection would decrease the population and leave large areas open for colonization. Those areas would often have changed ecosystems. Different survivor populations would adapt by somewhat different genetic changes. When the different populations of survivors met and mixed during the population increase after the disastrous climate change, hybrids combining the best of many different adaptations would have advantages in colonizing the changed environments that had yet to be (re)colonized. So the variability selection model is indeed capable of explaining the delay from climate change to brain evolution. Moreover, the ongoing colonization would reduce social competition over resources in the time after the climate change, so the hypothesis that brain evolution was due to the "social competition that ensued" cannot explain the data.

The incremental advantage of a correct worldview

Sometimes it is claimed that the survival of individuals with false worldviews "proves" that there is no evolutionary use for having a correct worldview. That is a standard claim in the litany that goes "the brain constructs reality, it does not have to be objective". The claim ignores the evolutionary principle of incremental advantages, in this case the incremental advantages of a correct worldview. While it is true that an incorrect worldview can sometimes make correct predictions that help survival, a correct worldview can make correct predictions in a wider range of cases. While circular geocentrism could predict some eclipses correctly, elliptical heliocentrism could also predict planetary movements that the former hypothesis could not. Applying a similar incremental advantage model to predictions of where food, water and carnivores were present in the Stone Age, it is obvious that a correct worldview (especially in the face of changes such as climate change or the immigration of new plant and animal species) gives incremental advantages over an incorrect worldview, just as an eye with a lens gives an incremental advantage over a simple collection of light-sensitive cells.
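
As a one-number illustration (advantage size hypothetical) of why the mere survival of individuals with false worldviews refutes nothing:

 # Hypothetical: a slightly more correct worldview gives a 1% per-generation
 # reproductive edge. Carriers of the worse worldview still survive, but the
 # edge compounds over generations:
 advantage = 1.01
 print(advantage ** 1000)  # ~20959x relative growth after 1000 generations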

Red herrings

Social intelligence

As for the claim that big, conscious brains are required to form societies: there are tiny-brained insects that form societies. They have all the evolutionary advantages of organized societies, lacking only innovation and critical thought. That is all it takes to debunk the claim that life without a device for justifying and taking credit would be lonely and/or disorganized.

The claim that big brains are required for stable social relationships and hierarchies in which each individual has its place is a distinctive claim, but it is debunked by the fact that each individual in paper wasp and jack jumper ant colonies has its place. As for deception? Honey bee workers, and workers of some rainforest ants in which a mutation makes them fertile, usually lay eggs only when they are not watched, and the few caught laying eggs are killed. There is also evidence of rescue behavior in ants (Erik Thomas Frank et al. Saving the injured: Rescue behavior in the termite-hunting ant Megaponera analis. Science Advances 12 April 2017. DOI: 10.1126/sciadv.1602187), along with the fact that many insects help the eggs and larvae of the colony survive. This debunks the claim that large mammalian brains are needed for altruism of the kind that supposedly was the evolutionary advantage of evolving mammalian "empathy" mechanisms in the brain.

Computer simulations show that simple reactions with no intelligence can emulate what is known as "ape politics". Monkeys remain socially functional in their groups even if their entire neocortex is surgically removed (though their ability to learn new tool uses and to avoid new poisonous fruits and new predator warning calls is impaired by such surgery). No human hunter-gatherers are anywhere near the 150-strong groups predicted by the Machiavellian intelligence hypothesis; the big human brain's demand for nutrients that are rare in uncultivated nature imposes feeding restrictions. See also H/T: Machiavellian intelligence. There are also some snakes capable of coordinated hunting (Vladimir Dinets. Coordinated Hunting by Cuban Boas. Animal Behavior and Cognition 23 May 2017. DOI: 10.12966/abc.02.02.2017), though their brains are much smaller and consume far fewer nutrients than the brains of mammals. Hunting in a coordinated way is otherwise said to be a higher case of social cognition.

The fact that long-distance geographical dislocation of objects in the Paleolithic is generally associated with Homo sapiens and not with archaic hominins has sometimes been claimed as "evidence" of a derived Homo sapiens ability to form long-distance social networks. However, migration of sexually mature females from their group of origin to other groups existed as a strategy to avoid inbreeding in archaic hominins as well. Assuming that the nutrient restriction model is correct and groups became smaller and further apart with bigger brains, it follows that migrating females had to travel longer distances to reach a group with sufficiently unrelated males. Smaller and more widely spaced groups also make it less likely for an "intruder" from another group (including but not restricted to outgroup females too closely related to the local males to sexually attract them) to get caught in another group's territory. This "cross where nobody can see you" effect explains the facilitation of long-range migration without the redundant claim of an increased "social intelligence" allowing for more peaceful contacts between groups. The longer geographical dislocation of objects and manufacturing techniques in sapiens can therefore be explained by the luggage and knowledge of migrating females being brought further as a result of decreased, not increased, social networking. Other members of the groups to which the objects were brought may then have recycled the material they were made of.
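
A back-of-the-envelope sketch (all figures hypothetical) of the spacing effect assumed above: the distance between neighboring group centers grows with the foraging territory each group needs.

 import math
 def group_spacing(group_size, km2_per_person):
     """Approximate distance between neighboring group centers, assuming each
     group holds a roughly circular territory covering its foraging needs."""
     territory = group_size * km2_per_person
     return 2 * math.sqrt(territory / math.pi)  # territory diameter
 # Hypothetical: bigger brains quadruple per-person nutrient-area needs
 # while halving viable group size.
 print(group_spacing(60, 10))  # ~28 km between small-brain groups
 print(group_spacing(30, 40))  # ~39 km between big-brain groups

A migrating female may then have to cross several such spacings before reaching a group with sufficiently unrelated males.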

Even if Homo neanderthalensis disappeared when Homo sapiens arrived, it does not prove that it was social cooperation that allowed sapiens to outcompete Neanderthals. There are many possibilities, for example sapiens may have carried diseases that Neanderthals had no immunity to.

Even if some groups were to start a special type of social interaction that somehow selected for bigger brains, the decrease in group size for hunter-gatherers with bigger brains would be inevitable. If social selection for bigger brains were somehow independent of group size, allowing brains to become bigger as an adaptation to social interaction even as groups shrank (such as through runaway brain evolution or a feedback loop between brain and culture), it would still leave the groups affected by it weaker in manpower than groups unaffected by it. And then competition between groups would, in a Paleolithic context in which warrior count outweighs weaponry, eliminate the big-brained groups and with them the social type of selection for big brains. In addition, of course, to big brains increasing the risk of dying from starvation. Social interaction specialist brains, wired to feel offended by criticism of their beliefs, would be worse than useless for the kind of behavior change that could otherwise have compensated for their nutrient costs by being better at solving problems and finding ways to survive.

Falsifiability of the nutrient case against social intelligence

Is there any discovery that, if made, would falsify the remark that social cohesion would have selected against big brains, not for them, in human evolution due to nutrient costs? The remark is based on the observation that our ancestors during the evolution of our big brains were hunters and gatherers, with all that that means in terms of restrictions on food availability that cap group size. This means that if any evidence of agriculture two hundred thousand years ago were discovered, the nutrient case against complex brains evolved for social cohesion would be falsified. So the case is falsifiable, and thereby scientific.

This is similar to the case when it was claimed that the theory of evolution was not falsifiable, and the reply was that if a Precambrian rabbit fossil were discovered, the theory of evolution would be falsified. That makes the theory of evolution falsifiable and scientific.

Do not be the first to feel offended by getting the smaller share!

Sometimes psychologists claim that humans are "wired" to refuse offers of getting the smaller share instead of getting nothing, such as getting 10 cents while the other person gets 90 cents. Psychology often claims that people prefer that neither gets anything. The claim is that this is an innate brain mechanism that evolved to demand equal sharing of big game. What is not explained is how the first individual "wired" to feel that way could survive as a pack animal at a time when the rest of the pack consisted of individuals that gave the smaller share of big game away. The first individual to feel offended by such "inequality" would be severely socially disabled within the context of any social interaction that could have existed at that time. It would be negatively selected and eliminated from the population before it had a chance to select for "generous" or equal splits in others.
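
A minimal payoff sketch (shares hypothetical) of the survival arithmetic facing the first such "offended" individual:

 # Hypothetical food shares offered by the rest of the pack, in calories.
 offers = [10, 30, 10, 20, 10]
 def accepts_everything(offer):
     return offer  # takes the smaller share rather than nothing
 def rejects_unequal(offer):
     return offer if offer >= 50 else 0  # the mutant 'wired' to feel offended
 print(sum(map(accepts_everything, offers)))  # 80: survives on smaller shares
 print(sum(map(rejects_unequal, offers)))     # 0: starves on principle
 # In a pack that hands out smaller shares, the first rejector is eliminated
 # long before its preference can select for 'generous' splitting in others.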

If the lack of a realistic initial evolution scenario is not bad enough, it gets even more absurd when it is claimed that demanding an equal split gave humans a civilization-building edge over apes. It is enough to look at the origins of civilization to see that it was not driven by equality. While high technology makes equality possible, the initial steps of civilization were driven by forced labor and slavery. At such a stage, demanding equality would have weakened civilization. If humans demanding equality and apes accepting inequality were the principal difference in civilization-forming ability, apes would have been better than humans at creating civilization from scratch. Planet of the Apes for real! It is possible that humans would gain the upper hand once equality became technologically possible and could drive further technological progress, at least industrial technology, but not earlier than that.

Rule the world - what does it mean?

It has been claimed by Yuval Noah Harari, author of "Sapiens: A Brief History of Humankind", that humans "rule the world" because of an ability to relate to imaginary things such as money and religion, and that this makes it possible for humans to organize large societies. He claims that nonhuman animals are objective and that this prevents them from organizing large societies. The claim that animals are objective in a way that humans are not ignores, among other things, that nonhuman animals on hallucinogenic drugs or with overheated brains are known to react to completely nonexistent dangers without outer cues, and that they react to the smell of a carnivore even when the carnivore itself is not present. Also, it takes falsification to know whether a hypothesis is correct, and therefore to know whether something exists objectively or is falsely hypothesized. The claim that simpler brains are objective while more complex brains are non-objective is therefore complete bull.

The claim is also vague about what "ruling the world" means. Humans have not cured all diseases and cannot stop most natural disasters. Oxygenic cyanobacteria have impacted the environment more than humans have. As for the organization of large societies being uniquely human, ants and termites show otherwise. Millions of them in some colonies! While neither termites nor ants have technological progress, that is clearly due to something other than a lack of big societies.

Sexual selection

Could such a nutrient-wasting arrangement have evolved by sexual selection's disability principle (the handicap principle)? Displays selected by sexual selection consume the most nutrients and generate the most danger in sexually mature individuals (think moose antlers and peacock feathers). Sexual selection does not generate the building of future displays that consume the most nutrients or cause the most risk in small young still fed by their parents (that would have increased the difficulty of feeding the young and decreased their chance of surviving to the fertile portion of their lives more than good genes could compensate for). So such an arrangement is theoretically possible for beings that spend most brain nutrients on firing signals across poorly insulated nerves throughout life (such as birds with their compact brains).

But not for beings that spend most brain nutrients on building well-insulated nerves in very young individuals, saving nutrients through cheaper neuron firing later in life (such as mammals). Pre-industrial humans, before the recent shift towards earlier puberty caused by a changed diet and pollution (or maybe limitations on the functional DNA amount meant that, to make room for the immune system genes selected by diseases spread during and after the age of exploration, stages of development had to be skipped, resulting in neoteny), had almost all of their brain myelination completed in sexually immature stages of life. So even apart from the existence-of-science problem, a justifying "consciousness" cannot have evolved as a fitness display in humans (or any mammals, unless they have a brain structure that is very atypical for a mammal).

Just because a characteristic is considered sexually attractive, it does not have to be something non-functional that evolved due to the disability principle of sexual selection. There is no law of nature precluding functional characteristics that are also favored by natural selection from being considered sexually attractive as well. For example, a distribution of fat on a female that directly decreases the risk of miscarriage may be considered sexually attractive by many males. The fact that physical strength may enhance survival does not make it impossible for many females to find physically strong males sexually attractive. And in the same way, just because studies show that many people consider intelligence sexually attractive does not mean that it has to be a disability selected merely for show-off.

Upright walking and hidden ovulation?

It has been claimed that upright walking in pre-humans brought on anatomical changes that led to ovulation being hidden, so that males could no longer know whether a female was in the fertile part of her cycle. It has also been claimed that this led to more socially complex mating behavior, which selected for greater intelligence and bigger brains. However, this claim fails to explain why hominin brains remained small and apelike for millions of years of upright walking before bigger brains evolved.

It also provides no explanation as to why bonobos with their less obvious signs of ovulation compared to their chimpanzee cousins have smaller brains than chimps, not bigger.

Older men were bad hunter-gatherer providers

It is often claimed that husbands being older on average than their wives constitutes a panhuman instinctive pattern that evolved in hunter-gatherer times. In this case, psychologists and some anthropologists falsely generalize the monetary economy's pattern of wealthy older men to a hunter-gatherer economy in which physical strength was necessary to obtain the necessities of life. However, if men were important as providers well after the offspring were born, a significantly older man would likely become weak from old age before the offspring could fend for themselves. He would fail as a provider, and those genes would have died out.

Even if an older man was still capable of hunting when he mated with a younger, fertile woman, that would be no guarantee that he would remain a capable hunter long enough to provide for the family throughout the raising of the offspring. There are many older people who are healthy and strong for a while, but become sick and weak not long after. That is how aging works: Gompertz's law of exponential growth in the risk of dying has an equivalent in an exponentially growing risk of bad health! So within the context of families that take many years to raise, the weakness and bad health would come as a bad surprise later and leave the offspring without a providing father. It would not be a case of surviving in spite of a current handicap of the kind that may be sexually selected for. Current survival does not imply survival after the fast health deterioration that often happens in previously healthy older people.
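
A hedged sketch (parameters hypothetical, roughly human-like) of how fast a Gompertz hazard compounds with age:

 import math
 def gompertz_hazard(age, a=0.0001, b=0.085):
     """Yearly risk of death at a given age: a * exp(b * age)."""
     return a * math.exp(b * age)
 for age in (25, 35, 45, 55):
     print(age, round(gompertz_hazard(age), 4))
 # The hazard roughly doubles every ln(2)/0.085 ~ 8 years, so a currently
 # healthy older mate carries a far higher risk of failing as a provider
 # over the 15+ years it takes to raise offspring.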

The optimal foraging strategy straw man

These evolutionary facts about negative selection against waste of nutrients do take into account that some functions that consume nutrients can have effects that enhance fitness, and that there are other necessary nutrients besides empty calories. The straw man of "optimal foraging strategies", an obsolete hypothesis claiming that all animals forage to maximize their chances of finding calories (falsified by predator-avoidance strategies and by the fact that there are nutrients, such as essential vitamins, that cannot be translated into calorie counts), is not applicable to these criticisms. In fact, this criticism is based on the very facts that nutrient-hungry brains can have functions that may be evolutionarily worthwhile depending on how the brain works (allowing for productive but not self-defeating brain types), and that brains require many essential nutrients, other than sheer calories, that are rarer than empty calories in nature and therefore intensify the negative selection against wasting them.

Critical thought and neuroplasticity - positive or negative link?

One important implication of the fact that justification would be an evolutionarily useless waste of nutrients, and that independent critical thought enhances the ability to change the brain, is that the often repeated claim that more plastic brains are easier to manipulate and/or less capable of critical thought is nonsense. Even if independent thought only reduced neuroplasticity in degree, without eliminating it entirely, it would still be a nutrient-consuming "function" that only decreased another nutrient-consuming function (trial-and-error learning) in its ability to do its job, while not reducing its consumption of nutrients. Any negative link, even a gradual one, between independent critical thought and neuroplasticity would also make the capacity for science systematically unattainable. Any attempt to save the claim of such a negative link by saying "it is gradual, it is not black and white" therefore misses the point.

Any claims of a negative link between plasticity and conscious critical evaluation are thus invalid. This includes the combination of the claims "young people are more irrational" and "older people are less plastic". If it were claimed that "nonhuman animals are more plastic than humans" in combination with "nonhuman animals are more irrational", that would be equally invalid. It is the incompatibility at the factor structure level that matters; precisely, it is the de facto factor structure, the actual results, that counts. Claiming that "brain plasticity does not decrease as such, but the presence of many learned sub-routines makes the brain harder to change" is not a scrap closer to evolutionary realism than claiming outright that brain plasticity drops to near zero. In evolution, a mechanism for something that does not work is a waste of nutrients, so the former claim is further from evolutionary realism than the latter! And claiming that "it still works to some extent" cannot change the fact that it would be a waste of nutrients to "time" independent critical thought to a period in which change de facto works to a lesser extent.

Statistics that appear to show "mature" adults to be "too old to dramatically change their behavior" do not prove a biological obstacle against it. The pattern may be due to cultural assumptions about people over a certain age being "unimprovable", such as H/T: Self-fulfilling prophecies prevents behavioral modification. This does not in any way mean that senile-demented individuals with genuinely degraded brain tissue can properly think and change their behavior, any more than the possibility of independent critical thought at a younger age than considered possible by current mainstream society implies anything about little kids with a pre-functional connection between the brain hemispheres. That a hypothesis or theory predicts that some things considered impossible by other worldviews are possible does not mean that said hypothesis or theory predicts that anything goes.

The existence of science requires the existence of individuals capable of science. It does not, however, require all individuals officially classified as human to be capable of science. The emergent threshold of being capable of science is decoupled from any arbitrarily demarcated and collectivistic definitions of species. There is nothing in the requirements for the existence of science that demands that the prevalence of the capacity correspond to any species concept(s).

Other countercorrelates of age, archaeology and gold threads

There are archaeological sites where thousands-of-years-old intertwined gold threads of very fine craftsmanship have been found. These intertwined threads of gold were made without automation or industrial technology. They are so thin that intertwining them in that way takes the kind of good vision that is said to be lost forever in the 18-21 age range. However, the manufacturing technique also required extensive knowledge as well as the very patient and controlled movements that psychology claims are impossible for people under 25 years of age. This makes the twinned gold threads physical evidence of individual people combining in themselves abilities that, according to the official account of what people can do at different ages, should be impossible to combine in one person.

Given the importance of combining neuroplasticity with independent critical thought for science, there are analogies between these twinned threads of gold and science itself.

Fundamental pitfalls with evil versus mentally ill debates

Many people say "evil does not exist, but..." and then add something on the lines of "some dangerous people must be locked in" or "something is wrong with people who can do this and that". That deprives the statement that people are not evil of its substantial content, reducing it to a mere phrase. Being classified as "insane and dangerous" and locked in with the phrase "you are not evil, we do not punish you and we do not hate you" is only a hypocritized version of being locked up and called "evil". Or worse: incarceration and confiscatory guards keeping people from having their own things exist both in prisons and in closed psychiatric wards, but the latter also add the legal abuse of forcible medication and even electric shocks to the brain!

Legal abuse hidden behind politically correct language

There are many cases of people being classified as "mentally ill" at the same time as specific words for so-called "mental illness" are censored and replaced with others. It is claimed that this is out of "respect" for so-called "mentally ill" people, ignoring the fact that there is no respect for what an individual actually wants in the act of explaining his or her requests away as "symptoms" of an "ill" brain. For the individual, that is only never-ending torture, and externally decided "rights" have nothing to do with the person's unique individual needs.

Destroying the brain with medicine

An extreme case of the above is when externally created so-called "rights" include forcible medication with drugs that destroy the connectivity of the brain. As official classifications of "insanity" are arbitrary, individuals classified as such may be true persons capable of intelligent discussion and choice - until their brains are permanently damaged by pills that reduce them to mere beasts. The claim to "normalize mental illness and its treatment" therefore becomes a platitude phrase for mind-murdering people with thoughts that are arbitrarily pathologized by society. To silence, by politically correct censorship, the remark that brain-damaging pills destroy objective personhood, in the name of so-called "respect" for those who have already been literally dehumanized by drugs destroying their brains, is to silence the realization that people have the right not to have their brains forcibly destroyed.

What society is doing can be compared to a Spiritualist version of political correctness in which the remark that dead people are not alive is considered "offensive" to dead people, and in which this is used to justify killing people by live burial: "if you say that murder is wrong, you are offending the dead".