Nick Hudson, whom I interviewed in May for TCW on Covid and freedom, is a man who seeks explanations.
In his most recent talk, Origins and Trajectories of the Covid Phenomenon, he addresses perhaps the most important, indeed most fundamental, question of all: what are the roots of the widespread logic failures that he sees as responsible for global Covid policies?
What, quite simply, happened to our capacity for critical thought, reasoning and scientific method? For whatever the criminality or conspiracies that lie behind the global Covid response, whatever the selfish or power-hungry elite interests driving it, they could not have flourished without a fundamental, indeed revolutionary, knowledge shift.
In the first of this three-part series of articles, Nick describes the Western scientific model of logical reasoning and hypothesis testing, responsible for Western development, that has now been rejected and superseded. In the subsequent two parts, he homes in on the ideologies to blame for this – the three Ms. That is, Marxism, Malthusianism, and (Post) Modernism.
TWO years ago (during April 2020), we started PANDA in response to the emergent social, political and economic threat of lockdowns in South Africa.
By October 2020, it was obvious that, at least here in South Africa, we were dealing with a situation where local decision-making had become irrelevant. Local authorities were just rolling out things at the behest of unknown and undefined external stakeholders.
Because we were early in realising what was going on, at a time when very few independent groups had formed to analyse and interpret the data stream concerning the Covid phenomenon, a really rapid process of internationalisation occurred within PANDA.
We quickly developed a well-staffed scientific advisory board, and were in full operation by the end of 2020. By that time, the organisation included representatives from more than 30 countries, and we were well stocked with scientists of various flavours.
From the beginning, we emphasised the importance of grounding the entire project in a rigorous application of epistemology, the theory of knowledge. That’s a principle that has served me well over the years, whether I’m talking about something scientific, philosophical, or commercial.
Sound epistemology is always a good place to start. For obvious reasons, it is important to define and understand how you know what you know, what constitutes knowledge, and what constitutes something else.
To this end, I will start this chapter by laying out some fundamental language and terminology so that we all have a common set of words and ideas that we can build from.
After that, I will turn to looking at the ‘other side’ in this struggle; to examining the thinking and behaviour of those responsible for developing, approving and promoting the approved narrative.
In particular, I will focus on the structure of what they’re saying, both in terms of their propaganda and its elements, the three major cognitive errors that feature in their thinking, and how those logic errors filter through into the narrative that we’ve received about Covid.
I will then turn to examining how their errors relate back to cognitive failures in epistemic grounding concerning the theory of knowledge – the errors in thought and comprehension which underlie the subsequent cascading failures of public health policies.
Then I’ll briefly discuss where that leads us to, and what it suggests about what we should do in response to the failures in thought, decision-making, and public policy. Finally, I will address the ‘why’ question that everybody keeps cycling back to.
Let’s begin by examining the epistemological grounding (or lack of grounding) which has caused the widespread logic failures responsible for global Covid policies.
To provide context: before the advent of the modern approach to understanding science and explanatory knowledge, there was a shared belief that there are two general ways to develop knowledge, by the application of deductive or inductive reasoning, alone or in combination.
Deductive reasoning begins with a general premise and derives the specific conclusions that must follow from it, while inductive reasoning extracts a likely (but not certain) general premise from specific and limited observations.
You saw something that was true – observed one fact or another – and you applied both prior knowledge and internalised philosophical framework(s) to work out what that observation implied about the world.
In this view, all knowledge is deductive, flowing from some axiomatic, reproducible facts. This perspective leads to the conclusion that knowledge is finite in size: you would just have to work out all the deductions and then you’d know everything there was to know.
Closely related to that is the idea of induction. The sun came up every day in the past, the sun came up today, therefore the sun will always come up – and then you ‘know’ something.
Dr Sylvia Wassertheil-Smoller, a researcher and professor emerita at Albert Einstein College of Medicine in New York, observes that the scientific method uses deduction to test hypotheses and theories, which predict certain outcomes if they are correct.
She summarises the process in this way: ‘In inductive inference, we go from the specific to the general. We make many observations, discern a pattern, make a generalisation, and infer an explanation or a theory.
‘In science, there is a constant interplay between inductive inference (based on observations) and deductive inference (based on theory), until we get closer and closer to the “truth”, which we can only approach but not ascertain with complete certainty.’
This type of reasoning – the notion that good explanations would be verifiable by way of deduction or induction – is formally known as ‘logical empiricism’.
More recent philosophers have come to understand that knowledge grows not by deduction, but by the creative act of generating new explanations – conjectures that provide an account for some aspect of reality – which are then put to the test not by an attempt to verify them by way of deduction, but by an attempt to falsify them.
Thus ‘explanatory knowledge’ evolves in a constant cycle of conjecture and criticism, or conjecture and refutation.
If someone produces a fact that contradicts an explanation, we dismiss the explanation, and then we are off in search of a better one that is not at odds with reality.
Explanation is supposed to enhance our understanding of the phenomenon explained, and explanatory understanding must be an essential component of explanatory knowledge. Thus, a theory of explanation must say something informative about what understanding is – what it consists in and what separates genuine understanding from understanding that is merely illusory.
Under this theory of knowledge – this epistemology – every explanation is destined ultimately to be replaced by a better one.
An exquisite example illustrating this concept is found in the history of Newtonian mechanics. Before Einstein’s theories concerning relativity, everybody was absolutely convinced we’d solved the problem of how physical bodies related to each other on a macro scale and a micro scale, until Einstein came along with his amazing conjectures about relativity and blew the thing to smithereens.
And it required just one very intelligent falsifying experiment to work out whether Einstein was on to something. A pivotal experiment – one which nobody would have thought to perform had it not been for Einstein’s creative conjecture – was designed to definitively falsify one of the two frameworks.
That definitive experiment was performed after many years of preparation, and ended up demonstrating a falsehood present within Newtonian mechanics which was not present within Einstein’s explanations.
Of course, Newtonian mechanics is still used, because the answers provided by that system are locally accurate. So as an explanation it’s still useful, but it has been proven wrong in some situations, and therefore has been replaced by a better one – the explanations offered by Einstein.
And in turn, Einstein’s explanations are destined to be replaced by better ones at some future date, as many scientists are trying to do with the theory of everything.
The theory of explanatory knowledge goes like this: all knowledge growth, all knowledge generation, is fundamentally evolutionary in nature, and therefore deduction and induction are irrelevant.
When operating within the paradigm of explanatory knowledge, what we do is employ a process of creativity or innovation. We invent a new explanation, and that explanation is then tested.
There is an obvious analogy to this in biology, where the analogue of an innovative explanation is an innovative mutation or, more commonly, an innovative sexual recombination of genes.
That new genome is then tested in the real world by the process of what is often simplified as survival of the fittest. The new gene is a conjecture, and the real-world test is its criticism or refutation.
In this way, knowledge is incorporated into downstream genomes in a process of evolution, just as knowledge is incorporated into our explanations in a process involving incremental evolution of explanatory knowledge. This is indeed generalisable to all knowledge.
Most vexing problems exist within domains of some complexity, and that complexity challenges us. It defies any kind of deductive analytical solution. We test explanations on the margins of that complexity. An explanation either succeeds or fails in the real world through the process of conjecture and criticism.
In this way, the corpus of knowledge – the canon – consisting of these explanations requires conjecture and criticism in order to grow in this evolutionary way. And over time we replace bad explanations with better ones in an unbounded process.
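The evolutionary picture described here – conjectures playing the role of mutations, criticism playing the role of selection – can be made concrete with a toy simulation. This is only an illustrative sketch, not anything PANDA actually ran: the hidden rule, the mutation step and the ‘misfit’ score are all invented for the example.

```python
import random

# Toy sketch of conjecture and refutation. Observations follow a hidden
# rule, y = 3x + 2; a conjecture is a candidate rule y = a*x + b. Each
# round we mutate the current conjecture and keep the variant only if
# the evidence refutes it less badly -- selection acting on conjectures.

random.seed(1)
observations = [(x, 3 * x + 2) for x in range(10)]

def misfit(conj):
    """Total disagreement between a conjecture and the observations."""
    a, b = conj
    return sum(abs(a * x + b - y) for x, y in observations)

conjecture = (0.0, 0.0)  # an initial, soon-to-be-refuted guess
for _ in range(20000):
    a, b = conjecture
    variant = (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
    if misfit(variant) < misfit(conjecture):
        conjecture = variant  # the better explanation replaces the worse

print(conjecture)  # drifts toward the hidden rule (3, 2)
```

Note that nothing in the loop ‘deduces’ the answer: the final conjecture is simply the one that has so far survived criticism, which is the point of the analogy.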
So that’s the epistemological grounding for how PANDA has approached the Covid crisis.
I have spent some time developing this background and explanation because it embeds a couple of key ideas. First of all, you see straight away that any attempt whatsoever to kill the process of error correction will terminate knowledge growth. You need the process of criticism of explanations in order for knowledge to continue growing.
Once you understand the framework of explanatory knowledge, it is not hard to understand that knowledge-killing activities – the destruction of the mechanisms of error correction by preventing criticism – are directly related to the tendency of centralised power, or any kind of authoritarian perspective, to seek to control information, thought and free speech.
Authoritarians stop certain types of speech that are critical of certain views. That then leads to a situation of stasis, where there’s one view (or model) of the world, and very few mechanisms to allow that world view to improve.
No criticism is allowed, because criticism is seen as a threatening challenge to authority. And that is what we were faced with, very materially, throughout the whole Covid saga – ‘trust the experts’, ‘follow the science’, ‘conform with community standards’.
Preceding and particularly during the Covid crisis, mis-, dis- and, most specifically, mal-information have become labels applied to perspectives that contradict authorities, but do not necessarily contradict objective truth (reality).
In Part 2 tomorrow, Nick examines the shaky foundations of the globalist agenda.