#### Steven Avery

##### Administrator

===============================

sister threads

the Critical Text (Westcott-Hort recension) statistical charade of false %s began with Hort and continues today

https://www.purebibleforum.com/index.php/threads/a.973/post-2149

statistical illiteracy in textual scholarship - Daniel Wallace struggles with numbers

https://www.purebibleforum.com/index.php/threads/a.294

statistical illiteracy in textual scholarship - Norman Geisler struggles with numbers

http://www.purebibleforum.com/showt...rlatans-used-to-support-Bible-text-confusions


And from the Homestead Heritage section.

statisticulation - writing to obscure and divert from the real issues

https://www.purebibleforum.com/index.php/threads/a.372

**Statistical Illiteracy in Textual Scholarship - Daniel Wallace struggles with numbers**

This review was posted on a number of forums, and is an ongoing study. The text below starts with the version from:

[TC-Alternate] statistical illiteracy in textual scholarship - Daniel Wallace struggles with numbers

Steven Avery - July 28, 2015

https://groups.yahoo.com/neo/groups/TC-Alternate-list/conversations/messages/5978

A similar version was posted on the textualcriticism forum, where Daniel Wallace participates. In this case I made no special attempt to contact Wallace, since the errors in this paper go back decades, and I was aware that he had been informed in writing of corrections needed in the writings of a textual scholar, and nothing had been changed. I did, however, make sure the information was posted where he reads, and where his student Bill Brown reads, so that any counterpoint attempting to defend the statistical illiteracy could come forth.

These blunders are not minor numerical concerns; they show that Wallace is simply statistically illiterate (whatever you think of his overall position for the Critical Text, his attempt to have his own English version, including notes that follow Metzger, the various errors he puts forth in attacking the AV, or his various grammatical forays).

**================================**

The paper being discussed is:

The Majority Text and the Original Text: Are They Identical?

*Bibliotheca Sacra* (BibSac) 148 (1991) 150-169.

https://bible.org/article/majority-text-and-original-text-are-they-identical (2004 online)

*Bibliotheca Sacra* is a journal put out by Dallas Theological Seminary.

Then in June 2004 the paper was posted on the internet. Thus it has had almost 25 years of readership availability since publication.

======================================

Please note the math in this article. Daniel Wallace was responding to Wilbur Pickering; however, the back-and-forth issues are not the emphasis here. This section was recently quoted on Facebook by a respected scientist and creationary scholar, as part of an attack on the TR-AV and Majority positions. And the Daniel Wallace attacks on the TR and Byz texts, utilizing these false conclusions, are frequently quoted online.

The numbers that were given here, 98% and 99%, were a major part of the argument against the significance of the Received Text and Byzantine/Majority text positions.

======================================

**PICK A NUMBER OUT OF A HAT - (Unrelated Numbers)**

======================================

The Majority Text and the Original Text: Are They Identical? (1991)

Daniel Wallace

https://bible.org/article/majority-text-and-original-text-are-they-identical (2004 online)


How different is the Majority Text from the United Bible Societies’ Greek New Testament or the Nestle-Aland text? Do they agree only 30 percent of the time? Do they agree perhaps as much as 50 percent of the time? This can be measured, in a general sort of way. There are approximately 300,000 textual variants among New Testament manuscripts. The Majority Text differs from the Textus Receptus in almost 2,000 places. So the agreement is better than 99 percent. But the Majority Text differs from the modern critical text in only about 6,500 places. In other words the two texts agree almost 98 percent of the time. **

** "Actually this number is a bit high, because there can be several variants for one particular textual problem, but only one of these could show up in a rival printed text. Nevertheless the point is not disturbed. If the percentages for the critical text are lowered, those for the Textus Receptus must also be correspondingly lowered."

SA

The 2,000 (MT-TR) and 6,500 (MT-CT) figures are understandable; there are always nuances in counting. Also, ideally, there should be a weighing of variants, since a 12-verse variant (Mark ending, Pericope Adulterae) or a critical one-word variant (1 Timothy 3:16) can be greater in significance than 100 or more relatively minor variants.

By contrast, the 300,000 figure comes from a totally different realm, and has no relation to the 2,000 or the 6,500. (The high number shows that if you have 5,000 Greek New Testament mss, some of them wild and rife with error, you can find lots of variants. In fact, one single text can supply thousands.)

======================================

**THE BOGUS CALCS**

Byz-CT 98% -- Daniel Wallace took the 6,500 variants and divided by 300,000 (the irrelevant divisor); 6,500 is a bit over 2% of 300,000 (2.167%). Thus 100% - 2% (rounded off) = the Wallace

**98% affinity**

Byz-TR 99% -- Daniel Wallace took the 2,000 variants and divided by 300,000 (the irrelevant divisor); 2,000 is 2/3 of 1% of 300,000. Thus 100% - 2/3% = 99 1/3% (rounded off) = the Wallace

**99% affinity**
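For clarity, the arithmetic behind those two figures can be reproduced in a few lines. This is a minimal sketch; the function name is mine, not Wallace's:

```python
# Reproducing the arithmetic behind the 98% and 99% figures: each
# between-text variant count is divided by the global 300,000 estimate,
# a divisor that has nothing to do with the two texts being compared.

GLOBAL_VARIANTS = 300_000  # the total-variant estimate used as divisor

def wallace_affinity(variants_between_two_texts):
    """Percent 'agreement' under the divide-by-global-total method."""
    return 100 * (1 - variants_between_two_texts / GLOBAL_VARIANTS)

print(f"Byz-CT: {wallace_affinity(6_500):.3f}%")  # 97.833% -> "almost 98%"
print(f"Byz-TR: {wallace_affinity(2_000):.3f}%")  # 99.333% -> "better than 99%"
```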

=======================================


Separately, a real calculation, using words in the text, was given by Professor Maurice Robinson, approximately:

Byz-CT 93.5%

Byz-TR 98%

This calculation did not make any adjustments for significance, and it rests on one particular word-based methodology. Note that if Daniel Wallace had used the more accurate 93.5% figure, which at least has a defined methodology, he would have had to abandon his major conclusion:

"the majority text and modern critical texts are very much alike, in both quality and quantity" - Daniel Wallace false claim

Note, I am not endorsing the Maurice Robinson numbers as the better ones; I actually have reservations about all such two-text comparison percentages. I am just saying that they at least are built on an explainable and reasonably sound statistical base.
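Robinson's actual word-count methodology is not reproduced here. Purely as an illustration of what a word-level affinity measure can look like, here is a minimal sketch using Python's standard `difflib`; the measure and the sample readings (the 1 Timothy 3:16 alternatives discussed later in this post) are illustrative only:

```python
# Illustrative word-level affinity between two readings, via difflib's
# sequence matcher. This is only a sketch of the general idea of a
# word-based comparison; it is NOT Maurice Robinson's actual methodology.
from difflib import SequenceMatcher

def word_affinity_pct(text_a: str, text_b: str) -> float:
    """Percent similarity of the two word sequences (difflib ratio)."""
    a, b = text_a.split(), text_b.split()
    return 100 * SequenceMatcher(None, a, b).ratio()

# One-word variant (cf. 1 Timothy 3:16): five of six words are shared.
pct = word_affinity_pct("God was manifest in the flesh",
                        "who was manifest in the flesh")
print(f"{pct:.2f}%")  # 83.33% by this crude word-sequence measure
```

Even this toy measure shows how quickly a single-word variant registers at the word level, unlike a divide-by-300,000 approach.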

======================================

**UNSOUND METHODOLOGY --> UNSOUND CONCLUSIONS**

The problem we have here is that this is not fuzzy math, this is full-blown bogus math. The **methodology used is totally false**.

The number of textual variants calculated globally (i.e. across all Greek mss), in this case the 300,000 divisor, is an **unrelated number** to the affinity between any two specific texts. Please think it through. Two texts do not get closer together if the total variant count is seen to be 1,000,000 instead of 300,000. They do not get farther apart if the total variant count is seen to be 50,000 or 20,000. And the number is not only unrelated, it has plenty of wiggle room: the same calculation could be done with only translatable, significant or printed variants specified, giving a number like 20,000. One unrelated number is as good as another. However, then the results would not pan out for the Wallace apologetic.

Thus, if you plugged in 20,000 as the divisor (a count, perhaps, of total printed or significant variants, just as "legitimate" a comparison number as the 300,000), your affinity number for the two texts, Byz/Maj-CT, would be close to 67% instead of 98%. Yet the two texts being compared have not changed in even one letter. And then this bogus conclusion in the paper simply would not be possible:

"Not only that, but the vast majority of these differences are so minor that they neither show up in translation nor affect exegesis. Consequently the majority text and modern critical texts are very much alike, in both quality and quantity"

Daniel Wallace, *ibid*
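The divisor-swap point can be verified in a few lines: the 6,500 differences between the two texts are unchanged, yet the "affinity" swings wildly with the choice of unrelated divisor. A minimal sketch:

```python
# Under the divide-by-a-global-total method, the "affinity" depends only
# on the chosen divisor, not on the two texts being compared. Swapping
# 300,000 for 20,000 moves the same 6,500 differences from ~98% to 67.5%.

def affinity_pct(differences, divisor):
    """Percent 'agreement' for a given between-text count and divisor."""
    return 100 * (1 - differences / divisor)

for divisor in (300_000, 20_000):
    print(f"divisor {divisor}: {affinity_pct(6_500, divisor):.1f}%")
# divisor 300000: 97.8%
# divisor 20000: 67.5%
```

Same two texts, same 6,500 differences; only the arbitrary divisor changed.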

This conclusion has other "facts on the ground" difficulties.

**It is simply not true that the vast majority of the 6,500 Byz/Maj-CT differences are not translatable.** There are thousands of translatable variants given in an apparatus. *Maybe* you could contend one half, which is well short of a vast majority. So you have GIGO, with a false "very much alike" conclusion. This bogus conclusion was keyed off the statistically false 98% number, essentially a plug-in produced by choosing the unrelated 300,000 number as the divisor.

And note, this statistical problem in the paper should be easily recognized by the smell test.

**6,500 variants in 8,000 verses** can be measured for affinity in various ways (see below), and coming up with 98% is extremely unlikely with any sensible measure. About forty-five full verses are omitted in the CT that are in the Byz (a few more in the TR), plus thousands of significant variants. How could it be 98%?

(As mentioned, the number could plausibly be calculated at around 93%, rather than 98%. In the NT, 7% of the text is equivalent to 560 verses. I don't think anyone would say that this is "very much alike".)
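The smell-test arithmetic above takes only a few lines, using the approximate 8,000-verse NT count from the text:

```python
# Smell-test arithmetic: the variant density of the Byz/Maj-CT
# differences, and the verse-equivalent of a ~93% (rather than 98%)
# affinity figure. The ~8,000-verse NT total is an approximation.

nt_verses = 8_000          # approximate New Testament verse count
variants_byz_ct = 6_500    # Byz/Maj vs critical text differences

density = variants_byz_ct / nt_verses
print(f"{density:.2f} variants per verse")            # ~0.81 per verse

disagreement = 0.07        # a ~93% affinity leaves ~7% in disagreement
print(f"{disagreement * nt_verses:.0f} verse-equivalents")  # 560 verses
```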

There is also nothing complicated in realizing that the math does not fit. Anybody who read and understood the classic **Darrell Huff book** *How to Lie With Statistics* should be able to find the problem in a couple of minutes. Recognizing the problem here does not require any special skills or training.

Incidentally, we do not know if Daniel Wallace played with these numbers with the purpose to deceive about the texts (hopefully not), or if he is simply statistically illiterate. Perhaps Daniel Wallace did not think it through and put forth the statistics in the paper as a sort of *hopeful monster* attempt. His footnote indicates that he had some second thoughts, yet he never realized that his divisor is totally improper: an ad hoc, fallacious, special-pleading choice, an unrelated number that has nothing to do with the textual affinity claimed to be measured.

========================================

**What can be done?**

If a very simple statistical calculation is totally wrong in textual science, **and is not noticed by the writer, his reviewers, peers and students, for years, for decades** ... what does that say about the science?

And clearly, more complex graphs and more sophisticated presentations can be similarly worthless.

SIDENOTE ON GRAPHS:

An example: articles on the topic of manuscripts through the centuries by Daniel Wallace and James White have used graphs built on similarly false methodologies. There is one in the article above. The purpose: to present a "revisionist history" (Maurice Robinson's phrase), and Professor Robinson has accurately pointed out how the language and use of the graph has not been scholarly. Wilbur Pickering also challenged the graph, however not rigorously. Earlier, on another forum, I placed a series of posts on the flawed graph. The impression was given that the Byzantine text and its variants only entered the Greek line late, a sort of back-door method to keep the Syrian (or Lucian) recension theory alive, to give an impression of the AD 400 to 900 period that is against all accepted textual history, and to give the impression that the early centuries were massively Alexandrian. (E.g. 100+ localized papyri fragments from gnostic-influenced Egypt, each one technically a ms, totaling a couple of NT books, are capable of skewing any statistical calculation based only on numbers of mss. And this is one of many problems. Some graphs do not even have an X or Y axis description, one of the tricks pointed out by Huff.)

However, the graph is not the issue here. It has its own study.

We can see in textual science that the goal of *agitprop* against a text like the Reformation Bible (TR) or the competing Byzantine text can outweigh scholarly study. This started with Hort ("vile" and "villainous" describing the TR, even before he began) and the beat goes on. And the math, statistical and graphic presentations can be totally unsound and unreliable, and nobody will notice. Or if they notice, the textual emperor will not be informed that there is a wardrobe malfunction.

Note: statistics can be manipulated on all sides, however papers that are published are supposed to go over a high bar of correctness and examination. If a Byz or TR-AV supporter, or an eclectic, makes a similar blunder, it should be quickly caught and corrected.

Maybe SBL and ETS should have seminars teaching about the basics of statistical manipulation. And should reviewers of papers be vetted for elementary statistical competence? What do we say about students educated today in such a statistically illiterate environment?

**My concern here is not just Daniel Wallace.** It is also what this says about a type of **scholastic and statistical dullness in the textual studies realm** as a whole. This should not have lasted in a paper one week without correction, much less almost 25 years, and still standing.

========================================

Similarly, the problem is not only statistics. One can look at the recent 2008 paper by Van Alan Herd, *The Theology of Sir Isaac Newton*, which was successful as a PhD dissertation, and see elementary blunders that passed review at the University of Oklahoma. Here is one of many examples:

*The Theology of Sir Isaac Newton* (2008)

Van Alan Herd

https://books.google.com/books?id=nAYbLOKKq2EC&pg=PA97

http://gradworks.umi.com/3304232.pdf

The error here, according to Newton, is assuming the word "God" as the antecedent to the Greek pronoun, ("who"), as the King James translators had assumed it and replaced the pronoun with the noun, "God" in the Authorized (KJV) version. Newton questioned this translation on the grounds that it is incorrect Greek syntax to pass over the proximate noun "mystery" which is the closest noun to the pronoun, in the text.

Virtually everything here is factually wrong, as anyone who has read and understood Newton's *Two Corruptions* would easily see.

========================================

Now, if a **textual writer flunks the elementary logic of statistical understanding**, are they likely to be strong in other areas of logical analysis? Are their textual theories all of a sudden going to become paradigms of logical excellence? Unlikely.

========================================

**THE PROBLEM OF AFFINITY CALCULATION**

Finding an agreed-upon method to measure the percentage of affinity between two texts, even two clearly defined printed texts, is a bit complex and dicey, since the measurements used are subjective and variable. Questions arise, and the answers can be chosen by the statistician.

What is the standard size of comparison?

Maybe verses? or words?

How many variants is a 12-verse omission/inclusion?

And are you weighing variants?

And there can be a variety of results. This complexity (quite a bit more sophisticated than choosing an irrelevant divisor) is rarely mentioned when affinity numbers are given in textual literature, even when the numbers make some sense, like those of Maurice Robinson, unlike the Daniel Wallace numbers above. This is a more general critique of the use of numbers in textual science. If there is an alternative between "God was manifest in the flesh" and "who was manifest in the flesh", statistically it is only one word (Greek or English), barely a blip on the radar of such a study, **yet that one little word has been a spiritual and textual and doctrinal battleground from 1700 to today.** Thus, the significance should be maximized, not hand-waved.

By contrast, for a three-way comparison of the nature of:

"The Peshitta supports a Byz-TR text about 75%, the Alex text about 25%"

it is easier to establish a sensible methodology that can be applied with some consistency and followed quite easily by readers and statistics geeks.

Although even there the caution lights should be on, especially about the weight of variants, for which I offer a maxim for consideration:

"Variants should be weighed and not counted"
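A support tally of the kind behind such a three-way statement can be sketched as follows. The `three_way_tally` helper and the sample collation data are entirely hypothetical, for illustration only, and are not real Peshitta readings:

```python
# Hypothetical sketch of a three-way support tally: over variation units
# where the two base texts differ, count which side a witness supports.
# All data below are placeholder letters, not real collation evidence.

def three_way_tally(collation):
    """collation: list of (witness, byz_tr, alexandrian) readings.
    Returns (byz_pct, alex_pct) over the distinguishing units."""
    byz = alex = 0
    for witness, byz_tr, alexandrian in collation:
        if byz_tr == alexandrian:
            continue  # not a distinguishing unit; skip it
        if witness == byz_tr:
            byz += 1
        elif witness == alexandrian:
            alex += 1
    total = byz + alex
    return 100 * byz / total, 100 * alex / total

# Four hypothetical distinguishing units: the witness sides 3-1 with Byz-TR.
sample = [("A", "A", "B"), ("A", "A", "C"), ("B", "B", "D"), ("E", "F", "E")]
byz_pct, alex_pct = three_way_tally(sample)
print(f"Byz-TR {byz_pct:.0f}%, Alex {alex_pct:.0f}%")  # Byz-TR 75%, Alex 25%
```

Note that this simple count still treats every unit equally; weighting the variants, per the maxim above, would be a further refinement.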

========================================

**Notes from How to Lie With Statistics - Statisticulation**

If you can't prove what you want to prove, demonstrate something else and pretend that they are the same thing. In the daze that follows the collision of statistics with the human mind, hardly anybody will notice the difference. The semi-attached figure is a device guaranteed to stand you in good stead. It always has.

How to Lie With Statistics, Darrell Huff, p. 76, 1993 edition, Ch. 7, The Semiattached Figure

Advertisers aren't the only people who will fool you with numbers if you let them. - p. 79

Misinforming people by the use of statistical material might be called statistical manipulation; in a word (though not a very good one), statisticulation. - Ch. 9, p. 102, How to Statisticulate

The title of this book and some of the things in it might seem to imply that all such operations are the product of intent to deceive. The president of a chapter of the American Statistical Association once called me down for that. Not chicanery much of the time, said he, but incompetence. - p. 102

the distortion of statistical data and its manipulation to an end are not always the work of professional statisticians. - p. 103

But whoever the guilty party may be in any instance, it is hard to grant him the status of blundering innocent. ... As long as the errors remain one-sided, it is not easy to attribute them to bungling or accident. - p 103

====================

Most of Darrell Huff's examples involve numbers that are technically accurate (thus the advertiser or politician can "defend" the numbers against charges of fraud). However, they are skewed by issues like sample size, selection bias, unknown comparisons, and various other rigged and omitted elements.

The false methodology of Daniel Wallace, his use of an unrelated number as the divisor, does not even reach that low bar of technically accurate yet deceptive. That is why the emphasis here is on his statistical illiteracy rather than manipulation. The kinder interpretation is that Daniel Wallace is simply statistically incompetent, not that he went out of his way to deceive.

As for the many professional textual scholars who read this paper over the years, or simply his peers, students and the public, who should have caught this blunder, the first quote above from Huff gives a partial explanation: a sort of blind faith and weariness that many have when they see a few numbers in an article, a glaze in the eyes, especially when the article is written by a *scholar* whom they think of as competent.

====================
