Main Page

From Falsifiable Scientific Method

Is the scientific method falsifiable? The scientific method is about falsifiable hypotheses: real science is falsifiable, while pseudoscience is not. Falsifiable Scientific Method is for falsifiable science. On the Falsifiable Scientific Method MediaWiki, each hypothesis is tested against empirical data, and links to that data are to be added. Science is about falsifiable predictions, not about peer review. Feel free to add reductio ad absurdums to any hypothesis or theory; just create an account and start editing!

This is the Falsifiable Scientific Method.

What is the Falsifiable Scientific Method MediaWiki about?

The Falsifiable Scientific Method MediaWiki is a place where anyone can add hypotheses and theories (H/Ts) that make falsifiable predictions, and get help finding links to empirical data to test them. You do not need an official scientist's title; it is the method that matters. Any user is free to add links to such empirical data. Really radical Open Science with the potential to increase falsifiability! The H/T articles themselves should make their falsifiable predictions clear, so that finding and adding empirical data relevant to testing them is as easy as possible.

If you find an H/T article that does not explicitly state all the falsifiable predictions that follow from its premise, please feel free to add any reductio ad absurdums that you can think of. Even if you do not know a practical way to test a prediction, or whether it will ever be practically possible to test it, it is better that knowledge of the prediction gets out, so that someone else can think of practical ways to create tests, than to keep potentially but not yet practically testable predictions to yourself.

Doing science is easy!

The scientific method is about taking claims to their logical conclusions (reductio ad absurdums) and testing them with evidence. You do not need a title to do that; you only need to be sapient. If you can distinguish the act of showing what conclusions a hypothesis leads to from belief in the claim, you are sapient. If you think "science is difficult and can only be done by people with high IQs", your claim is disproved by the fact that IQ tests are bogus down to their very assumption that speed of problem solving is the limiting factor of intelligence. Doing science is not difficult; you just have to be a person, as opposed to a mere hominid or beast.

The Internet is an excellent way to make the information needed for scientific progress even more available. Even science that requires a great deal of information is not hard to do when lots of people can link a hypothesis to relevant empirical data, and when many people can each contribute some data!

What's the point in having a wiki like this?

Problems in peer review

Science is having serious problems with nonsense passing peer review. This problem is most rampant in the most fragmented journals, which are also the most prestigious. It is exactly what the falsificationist model predicts should result from strict distinctions between scientific fields, which prevent empirical data collected in one field from being used to falsify an H/T in another field whose falsifiable predictions the data bears on. Even "traditional" Open Access journals tend to follow the obstructive fragmented model, which this wiki is intended to avoid.

Another serious flaw in traditional peer review, which the Falsifiable Scientific Method MediaWiki with its respect for the method of science aims to avoid, is the delusion that falsifiable predictions are an internal professional matter within a "discipline". That leads to dealing with only a small fraction of the many falsifiable predictions that follow from a theory or hypothesis, which prevents false hypotheses and theories from being falsified. On the Falsifiable Scientific Method MediaWiki, each theory and hypothesis is taken to make its own unique range of falsifiable predictions, which means that the predictive conclusions cannot be boxed into academic disciplines.

One problem with fragmented peer review concerns the application of skepticism: double standards about what counts as an "extraordinary" claim. The worst cases are those in which the same claim is treated as standard in some "fields" and as extraordinary in others. One example: purely theoretical cosmologists take the existence of dark energy almost for granted, yet models without dark energy are invoked to explain the apparently accelerating expansion of the Universe as soon as someone suggests that dark energy may offer a way to create usable negative gravity-like effects, or that the (apparent) fact that only space and not time expands shows the existence of a force that warps space but not time. Has the principle that science is something that works gotten lost in the labyrinth of peer-reviewed double standards?

If a hypothesis or theory is correct, it makes correct predictions. Different hypotheses and theories predict that different things should work, including things that can be practically used if the underlying theories are correct. The only valid way to argue that something that follows logically from a hypothesis or theory cannot be practically used is to criticize the hypothesis or theory itself. Saying "it is true but cannot be used", which is what "on the theoretical level it is an extraordinary claim to say that it is not true, but at the level of practical usefulness it is the claim that it is true and useful that is the extraordinary claim" translates into in plain language, is bullshit. Imagine someone in the 1930s or early 1940s saying "E=mc² is a well-supported theory and it is an extraordinary claim to say that it is not correct, but it is also an extraordinary claim to say that it predicts the possibility of chain reactions and can be used to produce nuclear power". That would be equally nonsensical. What such double standards do is prevent falsification under a false veneer of misdirected "skepticism".

It is also possible that the academic system's demands for specialization cause the apparent unwillingness to falsify the hypothesis or theory that one has been working on. The peer-review system puts people who have worked on a theory at risk of unemployment if the theory is falsified. Instead of assuming in a defeatist way that "it is human nature" and that current institutions are an "imperfect but necessary" mitigator (as if a system created by many beings sharing similar flaws, which institutions created by humans would be if "human nature" were the problem, had any hope at all of mitigating those flaws rather than building them into the system and its rules and making things much worse), it is time to consider the possibility that the system itself may be the problem! This wiki contends that if people who do science are freed from economic dependence on their theory remaining unfalsified, they will become willing to falsify any theories or hypotheses, including their own.

There are also problems with the risk of institutions making individuals less objective and more biased, even if the institutions are intended to reduce bias.

The openness of scientific discussions

For everyone who knows empirical data that may falsify a hypothesis to be able to contribute, scientific discourse must be open through and through. Any distinction between "internal discussions" in specialist "fields" and papers written after such discourse is unacceptable. "Open access" to the final products of non-open discussions is insufficient; so-called "internal" documents must be open access, and full integration into the ongoing discussion must exist. It is time to tear down the ivory tower altogether, and to destroy all publication embargoes, even of drafts and ideas that are not considered ready for publication in peer-reviewed journals.

It is also important for science to destroy all paywalls.

All sapients are scientists

As a result of open science, all distinctions between "academic discussions" and "public claims" must go away. To be sapient is to be capable of scientific falsification. Any assumption along the lines of "it would be unethical to express this-or-that in public, but it may be discussed in academia by specialists in the field" is bogus. If "the public" were a herd of beings that responded with so-called "unethical" behavior to hearing certain things discussed, there would be no way for beings that are essentially the same (such as humans) to "ethically" discuss those things within the confines of a specialist "field".

The claim that "specialists behave differently because they follow certain rules" misses the point that the rules were created by humans. A law can never be less bad than its legislator(s), and in the same way beings that are essentially incapable of science cannot do science by following rules written by beings that are also, in essence, unable to do science. Since the flaws of the creators of the rules would be built into the rules themselves, and would thus be enforced rather than mitigated, the claim that "it is not perfect but it is better than nothing" misses the point.

Creating something new, not false dilemmas between institutions

When it is claimed that peer review is important for science, the examples of pseudoscience outside peer review that are cited refer to institutions that lack openness, practice immunization against criticism, or share other flaws with peer review. Destroy the paywalls instead of replacing them with other paywalls, for example. It is time to stop posing a false dilemma between peer review and other institutions with analogous flaws, and to start something new without those flaws instead. This wiki is a chance to do that.

The relevance for individual hypotheses

On the Falsifiable Scientific Method MediaWiki, only the actual falsifiable predictions made by an H/T determine what empirical data is relevant. Classification of science into "fields" is immaterial. If you know relevant empirical data, please add it. Don't ask the H/T for credentials, just link! It's easy! Even if fewer than one in 10,000 novel hypotheses is correct, missing out on that one by generalizing is unacceptable. The wiki provides a way to mass-test hypotheses cheaply and easily yet rigorously critically, making obsolete the claim that there are "too many hypotheses on the Internet to test them all". It is necessary to destroy paywalls.

The irrelevance of professions

One straw man about testing all the falsifiable predictions that follow from a hypothesis is the false assumption that it would mean any data could be used for or against any hypothesis or theory. One formulation of the straw man is "should astronomers review papers in biology?". The truth is that field-based generalizations about what is relevant cannot be made; relevance must be determined case by case, for each hypothesis. An astronomical discovery may have relevance for a biological hypothesis, in and only in precise ways that follow from each hypothesis. The discovery of an astronomical event that impacted the environment on Earth 3.9 billion years ago can be relevant to the origin of life within the context of a hypothesis stating that life began 3.7 billion years ago, but not within the context of a hypothesis saying that life began 4 billion years ago.

That academic borders are irrelevant does not mean that there is no way to say what data is relevant and what is irrelevant. Even within the same "field" there are irrelevant data points. Does the classification of both the origin of Ebola viruses and the chemical structure of HIV virus membranes into the same "field" mean that the discovery of mechanical flexing in the membrane of HIV viruses would prove that Ebola originated in Switzerland, or disprove that it originated in West Africa? No, it would not. The realization that it would not shows that it takes something other than professional classification to determine the relevance or irrelevance of data. There are thus no grounds for assuming that recognizing that astronomical discoveries can sometimes be relevant to biological hypotheses would in any way lead to nonsense such as claiming that the discovery that Jupiter never cleared its orbit disproves the hypothesis or theory that elephants evolved in Africa.

Repeatability, replicability and reproducibility

The number of repeated supports, in the absence of repeated falsifications, gives a gradual scale of testedness. This is similar to reproducibility, with the modification that it measures what has already been reproduced or replicated. Truly radical altmetrics! This gradual scale is used on this wiki instead of the traditional binary classification into untested or poorly tested hypotheses on one hand and well-tested theories on the other. When many H/Ts make somewhat different predictions about the outcome of a single suggested experiment, and/or tests of further H/Ts can be added to a suggested experiment at little extra cost, a multitesting experiment can be suggested to reduce the cost per tested H/T. Don't ask how "plausible" a hypothesis is; test as many as possible to reduce the average price per tested hypothesis instead.

Multitesting experiments mean that one experiment is designed to test many hypotheses and/or theories at once. For example, suppose the cost of testing each of 10 different quantum gravity hypotheses, which make slightly different predictions about how high speeds influence the non-spherical shapes of highly radioactive atomic nuclei during their decay, is 10 million dollars per hypothesis, for a total of 100 million dollars if you run them as separate accelerator tests. Once the initial 10 million dollars for testing one hypothesis is already paid, tuning the detectors to be able to detect all the signatures may cost only an extra 50,000 dollars per added hypothesis. That gives a cost of 10 million + 50,000 × 9 = 10,450,000 dollars instead of 100,000,000 dollars. The results can also be used to test further hypotheses that were not thought of at the time, should hypotheses appear that make predictions for which the data is relevant. There are many other possibilities; at much lower cost levels, too, the price can be reduced to a fraction of that of separate testing.
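The cost comparison above can be worked through explicitly. The dollar figures are the same hypothetical numbers used in the example, not data about any real experiment:

```python
# Worked version of the hypothetical multitesting cost example.
base_cost = 10_000_000   # full accelerator run for the first hypothesis
marginal_cost = 50_000   # extra detector tuning per added hypothesis
n_hypotheses = 10

# Running each hypothesis as its own separate accelerator test:
separate = base_cost * n_hypotheses

# One shared run, with nine hypotheses piggybacking on the first:
multitest = base_cost + marginal_cost * (n_hypotheses - 1)

print(separate)   # 100000000
print(multitest)  # 10450000
```

The average price per tested hypothesis drops from 10 million dollars to about 1.045 million dollars, which is the whole point of multitesting.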

Part of this is also this wiki's goal of making observational and experimental data openly available before its relevance for the "intended" hypothesis has been assessed. One experimental result can falsify many hypotheses; applying each result to as many hypotheses as possible gives more science for less money. Postponing the publication of experimental data until its relevance for one hypothesis has been determined is bad for the potential to use the data to test, and possibly disprove, other hypotheses.

Precise falsification does not kill hypotheses for being related

This wiki supports falsifiabilism. It is based on the idea that the traditional "criticisms" of the falsifiability criterion confuse different things. For example, claiming that the criterion of falsifiability would "kill hypotheses before they have time to develop into theories" ignores the fact that the real (as opposed to straw man) falsifiability criterion recognizes the differences between models that make distinguishable predictions. Only straw man folly, which has no place on the Falsifiable Scientific Method MediaWiki, would ever claim a hypothesis to be false merely because it is associated with a hypothesis or theory that was falsified for predictions the actual hypothesis or theory does not make. Therefore the criterion of falsifiability does not kill theories before birth.

For example, Albert Einstein claimed that a "cosmological constant" kept the Universe at a constant size, neither expanding nor contracting, by counteracting gravity. Later, the expansion of the Universe was discovered in the redshift of distant galaxies observed by Edwin Hubble. Einstein later called the cosmological constant "the greatest blunder of his life": it had kept him from predicting the expansion of the Universe. While the original sense of the cosmological constant was falsified, later observations of distant type Ia supernovae showed the expansion of the Universe to be accelerating, and a "cosmological constant" of a different magnitude, one that increases the expansion more than gravity can decrease it, was then suggested as a hypothesis. This is an example of two "related" hypotheses being separated in such a way that falsification of one does not falsify the other, showing that Richard Feynman's assumption that falsification of a hypothesis would "kill" any distinguishable-prediction models inspired by it was wrong.

Anyone can suggest better ways to test hypotheses

On this wiki, anyone can come up with more efficient methods to test hypotheses or theories. The more people contribute possible solutions, the greater the chance of finding a cheaper, more efficient solution that generates more scientific data and tests more hypotheses and theories. Whether or not you could afford to realize the idea, or any part of it, with money you have access to does not matter. Ideas for experiments are to be evaluated on their efficiency, not on the wealth or lack thereof of their creators. This gives the most science for the least total money. The litany "it is not about having ideas, it is about making them real" has no place as an "argument" against an experiment suggested by a person with little or no money.

This wiki is founded upon the idea that listening to ideas from people who are not famous and do not have much money or status opens up a gold mine of ideas. We think this has the potential to be more efficient than wasting time waiting for people with ideas to climb in status and promote the ideas on their own. Let's surpass the status-obsessed system with our openness to ideas!

The difference between dynamic optimism and passive optimism

This wiki's mission is an example of dynamic optimism, the idea that it is possible to make things better. The distinction between dynamic optimism and passive optimism is important. Passive optimism is to think that things get better by themselves, which stops improvement as much as defeatist pessimism does. Neither the belief that science will progress automatically nor the belief that science cannot be rebooted would make this wiki worthwhile!

There are many differences between dynamic optimism and passive optimism. Part of the distinction is the realization that the potential of an idea is decoupled from the social status of its creator. The claim that "you cannot save the world because you are not the Messiah, therefore your idea cannot save the world" is a fallacious conflation of idea and creator. It is not the opposite of the messianic fallacy but a version of it, since it assumes the exterior shell of the creator's social status to determine what the idea can do.

The fact that ideas must be tested does not contradict the difference between passive and dynamic optimism. More people must express their ideas on platforms that enable efficient mass testing, so that the good ones can be found by critically scrutinizing them all. This difference between dynamic and passive optimism shows the importance of actively linking ideas to data relevant for testing them, as well as of adding ideas. Just because you do not know whether your idea is correct is no reason to keep quiet about it. It may be correct, so express it so that it can be tested! And just because you do not know enough relevant data to fully scrutinize a hypothesis is no reason to be silent about the relevant data you do know. Link to what you know, and let others link to what they know. Together, we can test all hypotheses! This wiki overtly rejects the idea that articles should be kept non-public until "sufficiently" tested. Instead, it advocates the principle that hypotheses are best tested if they are already public, so that more people can link to empirical data of relevance!

Stop the straw man of assuming that the public availability of still poorly tested or even untested hypotheses makes it "impossible" to see which theories are well tested. It does not in any way prevent you from seeing the differences in how well linked hypotheses are and how thoroughly the relevance of those links has been analyzed. You can contribute to the testing by checking some of the links, instead of complaining passively that they were not all checked before publication!

The importance of constructive criticism, of rejecting false hypotheses in order to distinguish the more useful ones, means that it is extremely dangerous to think of positive thinking and negative thinking as opposites or as personality types. Since what matters is the positive effect that negative thinking about individual hypotheses has on the usefulness of hypotheses overall, the dangers are not mitigated by saying "it is a continuum, not black and white". To claim that there is a scale from positive to negative thought is to ignore the constructive power of critical thinking.

Using this wiki

To make the articles in this wiki easier to find, the main page has been added to the category "categories". If you click that category at the bottom of the page, you will reach a category page that contains all the categories on this wiki.

View source, edit, log in and accounts help

If there is a "view source" button instead of an "edit" button, you are not logged in. If you log in to your account, you will be able to edit this wiki. Accounts created on other wikis may not work here. If you do not have an account yet, you can click on "create account" to create one.

Accounts are for free

Registering an account on this wiki costs no money and will never cost any money. It is free. No matter how long you keep your account and no matter how much you edit, you will never be charged for it. Ever. That will never change on this wiki.

How to make links to empirical data

Please place links to empirical data at the bottom of the H/T article to which they are added: not as references in the running text, but as separate links at the bottom of the article. They are, after all, for testing the falsifiable predictions that follow logically from the H/T as a whole, not for verifying individual parts of the H/T article. External articles that can be copied without violating any de facto enforceable copyright laws could in theory be copy-pasted to create empirical articles on the wiki itself; however, as long as search engines such as Google penalize duplicated content and this wiki depends on them, this is not recommended. Links to data that falsifies an H/T, if present, should be written in bold. Links to data that supports the H/T should be written in italics. If you know of data that may be relevant but do not know whether it supports or falsifies the H/T, please feel free to add it with the links written in ordinary text; someone else may read the data carefully and change the text style of the links accordingly, or you may do so later if you want to. In all of these cases, the same link text style guidelines apply to internal links to empirical articles as to external links. In the context of multitesting suggestions, please write links to experiment suggestion pages in underlined text.
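In standard MediaWiki markup, these text style conventions might look as follows. The link targets and page titles are invented placeholders, not real pages:

```
* '''[https://example.org/dataset-1 Measurement that contradicts the H/T's prediction]''' (bold: falsifying data)
* ''[https://example.org/dataset-2 Replication consistent with the H/T's prediction]'' (italics: supporting data)
* [https://example.org/dataset-3 Possibly relevant data, relevance not yet assessed] (ordinary text)
* <u>[[Suggested multitesting experiment]]</u> (underline: link to an experiment suggestion page)
```

The `'''bold'''` and `''italic''` markers are ordinary wikitext; underlining uses the HTML `<u>` tag, which MediaWiki passes through.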

See H/Ts and observational signatures. These articles sit in their respective categories, which makes them practical both for finding articles to read and contribute to and for describing what the categories are all about.

Adding false H/Ts

Please feel free to add H/Ts that you know of strong evidence against, as long as you either link to data that debunks them or at least ask others to link to such data. Rejecting false H/Ts is important for scientific progress, and a large part of this wiki's mission is to allow falsifications that would never happen in strictly fragmented peer review. If you are asking for links, please add a note at the top of the article saying that the H/T is probably false but that links are needed.

Do not assume belief in H/T

Due to the importance of people feeling free to add H/Ts that they know or strongly suspect are wrong, it is also important not to assume that people who add H/Ts believe in them.

No assumption of motivation

Assuming that someone who adds reductio ad absurdums and/or data that contradicts a claim, or who otherwise criticizes a claim, "has an agenda" against that claim is never permitted. No matter what the claim is, no matter how scientifically well supported it is, no matter its relation to any laws or cultural taboos, assuming psychological motives or agendas behind the contradiction of a claim is never allowed. Demanding replication of the data, questioning its validity and so on is fine, as such criticism is not immunization against criticism. Assuming motives, by contrast, is unfalsifiable. Freedom of expression is necessary for science, and it means the right to apply premises consistently without facing any assumptions about one's motivation for the conclusions, no matter how controversial they are. See assuming motives as a totalitarian unfalsifiability. Do not assume motivation!

All claims along the lines of "it is hard/impossible to see any other reason for expressing view <something> than agenda <something>" are banned on this wiki. This applies no matter in what "direction" the claim is used. It is common today for both sides in a "debate" (or a "non-debate", in which one side has many more followers and says that the other side somehow "does not exist", even while contradictorily saying that they are a problem) to make such claims, and the ban on "view this-or-that is an expression of agenda this-or-that" applies in all directions. If someone claims that the view that GMO is safe is "only promoted by big companies that ignore people's health", and someone else claims that the view that GMO is risky is "promoted exclusively by groups who want to starve poor people to death", they are both in violation of the policy against assuming motivation. If both sides in a debate (or "non-debate") accuse you of "defending" their opponents, you know that you can think on your own. That is good!

Using reductio ad absurdums to show that conclusions follow from a premise is allowed, even if the conclusions may be considered "evil". Reductio ad absurdums are necessary for science. However, the distinction between what follows from an axiom and associations that are not axiomatic (such as polls or alleged statistical correlations between views) is all-important. Non-axiomatic allegations of agendas are banned as arguments for or against views. Even in the case of axiomatic reductio ad absurdums, however, it is not permissible to claim that any particular conclusion is "the motivation" for holding an axiomatic view. Such claims of rationalization are banned.

Don't allege irrationality to make the opposite look rational

There are serious problems with "scientists" being forced by a publish-or-perish culture and the economics of grants to use various tactics to make the papers they write look rational. One common strategy is to claim to have an irrational emotional bias toward believing the opposite of what one wrote in the paper(s), in order to falsely pass one's article off as a "rational victory over one's human nature". As long as the bias you claim to have is a desire to believe in (or to promote) the opposite of what you published or submitted for publication, the institutional culture of claiming that peer review "mitigates bias caused by human nature" favors those who claim to have biases that they do not have.

To stop this, the Falsifiable Scientific Method wiki does not value anything based on what "bias" the author or authors are said to have. A thirsty caveman who wishfully thought there was water where there was none would have gone where he or she thought there would be water, found none and died of dehydration. A thirsty caveman with a realistic view of where water could be found would have stood a chance of finding water and surviving. So much for the claim that evolution hardwired humans for wishful thinking.

Correlation does not equal causation

Claims of the type "that idea can be used for unethical purposes", "polls show that idea to statistically correlate with unethical opinions and/or behavior" or "when someone talks about <insert something>, it leads to/tends to lead to others thinking of <insert something (possibly) negative>" are to be kept off this MediaWiki. Reasoning is for leading to conclusions, not for justifying preconceptions. If you do not change opinions as facts come along, you could just as well save nutrients in your brain by sticking to your opinions without bothering to justify anything; in evolution, wasting nutrients on justifying fixed opinions would be strongly selected against. No evolution of fact resistance! And if you assume that people who make a certain claim do not listen to facts (or are fact resistant), you are contradicting your own claim that it is meaningful to debunk the claim you are debunking.

Nor do conclusions have to resemble the ideas that polls claim they correlate with; it may be that society pushes critics into straw men. The claim that it would be "ethical" to work for worldviews that statistically appear to correlate with "good" opinions, and against claims that statistically appear to be "linked" to "bad" opinions, is a flawed approach. It takes scrutiny of opinions to make improvement, as opposed to merely replacing one arbitrary whim with another equally arbitrary fad. Also, pushing people into straw man groups by ostracism may cause such straw man groups to grow. Work against strawmanization instead of against the arguments themselves. And if you claim that you can think of something without being unethical, yet claim that people would think unethical thoughts if you spoke about it "in public", you are effectively assuming that you are elitistically superior to "the public".

This applies regardless of whether it is valid criticisms or false claims that are assumed to be "motivated" by a particular "agenda". For example, remarking that the effect of a mutation is decoupled from whether the mutation was random or intentionally created is a valid criticism of the official definition of GMO, and it is wrong to push people into commercial camps for expressing such criticism. However, the principle of non-assumption is not restricted to valid criticisms; it applies to nonsense as well. For example, there is clear empirical evidence for the numbers of victims of the Holocaust, including about 6 million Jews, so claims of much lower numbers are clearly nonsense in the empirical sense. Still, assuming that people who say otherwise "must" have an anti-Semitic agenda may force them into anti-Semitic organizations, and laws against Holocaust denial may increase the membership of neo-Fascist organizations. This has nothing to do with accepting nonsense, or with claiming that "everyone is entitled to their opinion" and tolerating fake news. Instead, it is about the importance of presenting the facts as a remedy against nonsense.

Imagine, as a thought experiment, that political parties used a roulette wheel to randomly pick what they considered "ethical" theories and what they called "unethical denial". Even with no initial correlation and no "psychological link" in the brain between a party's ethics and the theories it accepted, the party would start excluding people with the "wrong" theories or "denialism". Some people leaving the party, and other members keeping quiet about their theories, would create a statistical correlation. The flawed methodology of psychology would then claim this as "evidence" that the randomized theories were constructed as justifications for unrelated party ethics. This wiki is founded on the principle of making no assumptions about agendas behind theories or "denial", of transcending self-fulfilling prophecies. Socialists should feel free to say that global warming is not so severe, and liberals and conservatives should feel free to say that global warming is rapidly causing the end of the world. Remember, however, that all claims will be scrutinized with empirical data.

Criticizing assumptions of motivation

Just as criticizing a hypothesis is not the same thing as saying that a particular other hypothesis is true, showing that an opinion cannot have a particular motive behind it is not the same thing as saying that a particular opinion is motivated by a particular agenda. For example, remarks of the type "if the mind were in a spirit, brain damage would not destroy or diminish one's personality, so opposition to psychiatric drugs that may damage the brain cannot be motivated by Spiritualism", or "migration is easier than changing laws and different countries have different ages of consent, so opposition to age restrictions cannot be motivated by pedophilia", are merely remarks that a particular agenda cannot be the cause of an opinion. In themselves, such statements do not violate the policy of not assuming motivation. However, if either of these statements, or any number of other such statements, is combined with the assumption that there must be some particular agenda, including but not restricted to claims that they "must" be motivated by trolling, they cross the line into assuming a particular agenda and thus violate the principle against assuming motivation. It is the line between criticizing an assumption and making an assumption that matters; what assumption the criticism attacks is irrelevant.

The possibility that some people who are somehow undaunted by assumptions of motives can still voice criticism does not mean that demonizing critics with assumptions of agendas can be scientifically acceptable; see the falsis articles for an explanation of that distinction.

Others focusing debate on more controversial conclusions

Even if a person recurrently makes extremely controversial statements on a particular issue, that is not acceptable grounds for assuming that the person has a particular agenda on that issue. It is possible, for example, that the person is simply applying a fundamental premise consistently, and that what may look like a focus on one extremely controversial view is due to that conclusion of the premise meeting the most extreme controversy and hatred and therefore being discussed the most. That is, the apparent focus on the most controversial conclusions is caused by external selective attacks on those particular conclusions, not by the person considering them any more important than other conclusions from the same premise.

Freedom of thought means the right to be consistent. If you demand that people holding one philosophical premise "must" mix in conclusions from a different philosophical principle, you are interfering with the right to consistency, and thus with the freedom of expression that is all-important for science. The wiki stands firmly and consistently by the war on psychological assumptions: the eradication of all fear of being pathologized, demonized or considered "suspect" for applying a philosophical principle consistently!

If a person has come up with a premise that leads to new and efficient criticism of an entire cluster of claims, that person must be free to express all those criticisms without fear of "psychoanalysis" and without fear of speculation about alleged "motives" for expressing so much criticism on a particular "topic". Forcing a person to "choose the most important" criticism leads to many criticisms being silenced, which is always bad. If there is a flaw to criticize, then criticize it! If there are many flaws to criticize, then criticize them all! Never assume an "agenda"! The more critical thought, the better.

Apparently innocent allegations of ulterior motives

Since it is possible for anything to be considered evil (psychologists may, for example, arbitrarily start claiming that something is associated with something evil, allege that people who criticize that claim are "defending" evil, and thereby create a culture that considers the initially neutral behavior evil in itself by alleged association with something already considered evil), the ban on assumptions of ulterior motivations is not restricted to allegations of motives that are commonly considered evil today. It also extends to apparently neutral motives. For example, it is forbidden to make claims along the lines of "if you think painting houses green does not hurt <insert something>, you have an ulterior agenda of painting houses green". It is also forbidden to make claims of apparently randomly matched motives, such as "if you criticize the practice of painting houses red, you have an agenda to ban barbecue", as it is possible to allege that anything is psychologically linked to anything.

Examples of absurd allegations of conflict of interest

In some cases, "conflict of interest" is alleged on shaky grounds that ignore important differences.

Pirahã and the hunt for recursion

One example is the allegation that the absence of a single example of recursive grammar in the recordings of the Pirahã language made by Daniel Everett, and by a missionary before him, reflects Daniel Everett's conflict of interest in proving that Pirahã lacks recursion. That allegation ignores the fact that the older recordings were not made by Daniel Everett but by someone else, and before the controversy over "universal grammar" began. The older recordings therefore cannot be the result of the recorder systematically omitting recursive sentences.

Linking: empirical data over interpretation

The H/Ts on this wiki should be described on the wiki itself, with external links serving to find and verify empirical data. If an external link leads to an article that contains both empirical data and hypothetical/theoretical interpretations, it is the empirical content that determines whether the link supports or falsifies a H/T. If a linked external article contains empirical data that does not support that article's own interpretation, then even on the view that thinking is about justification (a view not accepted here, but one whose objections can be addressed by reductio ad absurdum) the empirical data is virtually guaranteed not to have been hoaxed in support of the interpretation. And even without that argument, there is no reason to consider the empirical data less reliable. Even if the theoretical interpretations are worthless, the empirical data remains useful, just as Tycho Brahe's geocentrism did not stop his observations from debunking geocentrism. If the empirical data in an external article supports a H/T, a link to that article supports the H/T even if the article's theoretical interpretations are against it. Conversely, if an external article's empirical data falsifies a H/T, the link counts as a falsification link even if the article is written as nominally supporting that hypothesis or theory. Falsifiable Scientific Method is not Wikipedia. Original research is good!

Overall, the wiki supports the idea that hypotheses and theories should have their own articles and be digitally linked to the empirical data relevant to them. This wiki considers the peer-review approach of writing "papers" that lump hypothesis/theory together with empirical data in a static article, unchanging after publication, to be an outdated holdover from the paper-journal era, an antiquated method that has no place in the era of the Internet.

Specific measurement standards

When an exact number is given, the measurement standards should be clear. For example, when the percentage of a chemical substance in a mixture is given, it should be specified whether it is volume percent or weight percent; one should not trust a pharmacist who equivocated between the two. Any confusion between different kinds of percentages can be cleared up by specifying which type is meant. Many factoids spread because this rigor is ignored. For example, when it is claimed that a certain percentage of human communication is body language (one source says 55%, another says 90%), the lack of any clear specification of how the amount of information is measured when comparing different forms of communication places the claim in the realm of pseudoscience. There are many different measures of information: the information content of a video recording of the communication gives one number; a reduction of the recording that blanks the background and represents the communicators as computer-simulated figures, focusing on behavior rather than appearance, gives another percentage; and the amount of information needed to define the content of meaning gives a third.
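The volume-percent/weight-percent distinction is easy to make concrete. A minimal sketch, using ethanol in water as an illustrative example and assuming additive volumes (real mixtures contract slightly on mixing, which this ignores):

```python
# Approximate densities in g/mL at room temperature (illustrative values).
RHO_ETHANOL = 0.789
RHO_WATER = 1.000

def weight_pct_from_volume_pct(vol_pct, rho_solute, rho_solvent):
    """Convert volume percent of a solute to weight percent,
    assuming the component volumes are simply additive."""
    mass_solute = vol_pct * rho_solute
    mass_solvent = (100.0 - vol_pct) * rho_solvent
    return 100.0 * mass_solute / (mass_solute + mass_solvent)

# 50% ethanol by volume is only about 44% ethanol by weight,
# so a label that omits the type of percentage is ambiguous.
print(round(weight_pct_from_volume_pct(50, RHO_ETHANOL, RHO_WATER), 1))
```

A five-percentage-point gap in a pharmaceutical context is exactly the kind of equivocation the paragraph above warns against.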

When an exact number is not given, it may sometimes be acceptable not to specify a measurement unit. Some generalization laws fall into this category. For example, "if the risk of dying always doubles in a certain finite amount of time, starting from a non-zero initial risk, then eventually death will be inevitable" is valid. An atom with a half-life does not change its risk of decaying over time, but that difference places it outside the predicted scope of the example, so it does not contradict it. Equivocating half-life with the average survival time of an atom would still be flawed, however: the mean lifetime is the half-life divided by ln 2.
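Both points can be checked numerically. A short sketch, where the initial risk and half-life values are arbitrary choices for illustration:

```python
import math

def survival_doubling(initial_risk, periods):
    """Probability of surviving `periods` intervals when the per-period
    death risk starts at `initial_risk` and doubles each period
    (capped at 1, since a probability cannot exceed 1)."""
    survival, risk = 1.0, initial_risk
    for _ in range(periods):
        survival *= (1.0 - risk)
        risk = min(1.0, 2.0 * risk)
    return survival

# Even a tiny initial risk collapses to near-certain death once the
# doubling risk saturates, matching the generalization law above.
print(survival_doubling(0.001, 20))

# Constant hazard (radioactive decay): the mean lifetime exceeds the
# half-life by a factor of 1/ln 2, so equivocating the two is an error.
half_life = 10.0
mean_lifetime = half_life / math.log(2)
print(round(mean_lifetime, 2))
```

Note that the constant-hazard case never enters `survival_doubling`'s scope at all, which is why the atom example does not contradict the doubling law.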

Other how-to articles

Reductio ad absurdum

Splitting a H/T page

Demarcation articles

Observational signatures

Can a diagnosis be a falsifiable H/T?