Status: Exploratory.

Dealing with Claims

We come across claims everywhere: on websites like 80khours, in conversations, in books, on YouTube, and so on. We can choose to accept these claims without questioning them, or we can check them instead of blindly accepting them.

The other day I was watching a comedy show on YouTube about the Democratic debate. If it wasn’t for an STM telling me to look deeper and showing me how to do it, I would have just believed that Joe Biden was a racist. Some journalist had probably dug up his past, and he was being exposed now. Let’s take a closer look at it though:

Kamala Harris (paraphrasing): “(Joe Biden (JB))[1] (praised the reputation of two United States senators)[2] (who built their careers on segregation of race)[3]”—Source

One of the claims from above: [1] praised [2].

Subject: What Joe Biden said.

Predicate: praised the reputation of two United States Senators …

Example of Subject: “At least there was some civility. We didn’t agree on much of anything. We got things done. We got it finished.”—Rolling Stone

Does the example match the definition given by the predicate? It doesn’t seem so. For “praised”, I would expect something like “The US Senators were great souls”. When I heard what Kamala said, I imagined something more serious. After I dug a bit deeper with just ONE EXAMPLE, it didn’t seem like JB was actually praising anyone.

Thus we define Concrete Thinking (CT). Concrete thinking is testing a label (which forms a claim) by checking an example against the given definition. We shall use CT to check the claims that everyone makes.

One vs Zero

How do you know anything about a claim when you don’t even have an example? When someone makes a claim, giving one example (i.e., applying CT) seems better than giving zero. Below are some examples:

  • CT gives the ability to falsify hypotheses

Claims: Venezuela is fine—Someone

Subject: Things happening in Venezuela

Predicate: Is fine.

Example of subject: People eating from the trash despite working full-time jobs.

Does it match the definition given by the predicate? It doesn’t seem so. All it took was one example to falsify the claim.

  • Labels are misleading

Claims: Trevor: “(Kamala)[1] is going to (wipe the floor with JB)[2]”—Source

Subject: What Kamala did.

Predicate: wipe the floor with JB

Example of Subject:

Harris’ average support jumped to 14.7% on Wednesday, up from 7% on June 25, the day before the two-day debate started. An average of 27.2% of respondents supported Biden as of Wednesday, a drop from 32.1% on June 25.—Source

Does the example match the definition given by the predicate? Wiping the floor with JB should mean, at the very least, that she beat JB in the polls. According to the above data from CNBC, JB still seems to be leading KH by more than 12 points. So the claim is false.

Until an STM pointed it out, I was implicitly assuming Trevor wouldn’t mislead me. I kind of believed his claims because I thought he had done his research. But when I started to dig deeper, I realized that I didn’t really know what he meant by “Kamala wiped the floor with JB”. I really thought Kamala had totaled him, either in the polls or in exposing who he truly was. But neither turned out to be true. When Kamala said that JB supported segregationists, I didn’t even know that I could question it. I almost immediately assumed that JB’s past was being brought to light, just like what happened to Trump with his scandals (“grabbing women by their pussy”). I didn’t want to wait. I had already passed my judgment.

Note: Even as I was giving examples, I see that I am capable of being misled. According to the New York Times, Kamala “surged” in three polls after the debate and Joe Biden “fell”. This sounded to me like Kamala was overtaking JB in all polls, until I saw the actual numbers from CNBC, which were contrary to my understanding of the labels. Words are quite misleading.

  • Labels are vague

At work I was handed some text, which I have documented in this practice, under –> Work: BM with springs.

It so happened that when I first read the text, I felt “confused”. There were so many words I didn’t “understand”, and I didn’t know what questions to ask to “clarify” until I CT’ed. One of the claims was,

Claims: Magnitude of this force is small.

Until I figured out that we were talking about a magnitude of 50 N in our case, and that this magnitude would not damage the frame and hence was small, this statement meant absolutely nothing to me. Just one example seemed to “clarify” what the author was trying to tell me. Labels are not only misleading, they are VAGUE.

When I applied CT to the whole document, I ended up with only two claims, out of dozens, that needed clarification. The CT process seems to be a sort of formal way of “understanding” text.

  • 0 examples seems to be useless

So I was speaking to my brother recently, and he seemed to think that he couldn’t buy a house in India because he would need a lot of money and wouldn’t get a loan for it. He was trying to convince me to pitch in money for buying the house. When I simply asked him how much this “lot of money” for the mortgage would be, he didn’t have any idea, not even to within ±300 dollars per month. He said he needed to check with his bank. My brother was talking in the air; he had no idea what he was talking about, i.e., he had 0 examples. And 0 examples were useless in convincing anyone.

Clearly one example is much better than 0 examples. What about more than one example?

One vs Many

Journal-level studies are typically solid proof of a claim. For example, the claim “intelligence is not connected to skill” was confirmed by looking at approximately 45,000 samples and inferring from the correlation between intelligence and skill. For claims related to medicine (“Radiation therapy cures certain types of cancer”), it seems we do need journal-level studies. Luckily for us, there are people around the world working on these types of things. Looking at one example in these cases does not seem to verify the claim. However, it still helps to look at examples just to think concretely, as “labels are vague”. For example,

Claims: Talent is not something you are born with

Example: We think of Jerry Rice, who was passed over fifteen times before being drafted into the NFL. Despite this, he is known today as the greatest NFL player, beating his runner-up by 50% in some stats.

But getting journal-level proof for every single claim we come across (like this Trevor Noah video) takes a lot of time and effort and is not practical. For example, within 1–2 minutes of this video there are at least five claims, which certainly need to be checked for correctness, as we have seen above that people just fart out claims. Procuring evidence with journal-level rigor for each of them would take a lot of time and effort.

In the previous section we saw that quite a lot can be achieved by giving just ONE EXAMPLE, and it takes only a few minutes per claim. So for now we stick to that and try to think concretely, since the journal-level option is not practical. We are yet to understand when we might need journal-level proof and where we might be misled by ONLY GIVING ONE EXAMPLE. (For later!)

Dealing with claims by Reasoning

Testing everything is time-consuming, even with just one example each. Sometimes we might not even be able to find examples, and then what do we do? Maybe reasoning is perfectly capable of verifying claims, and hopefully faster. Let’s look at an example.

Claims: (God)[3] wants you to not (eat onions)[1]

Reasoning: (Onions)[1] make (you passionate about food)[2]. And (God)[3] doesn’t want (you to be passionate about food)[4]. So God wants you to not eat onions.

Makes sense! Right? But let’s try to take a deeper look at this. [1] leads to [2]. [3] doesn’t want [2]. We can infer from this that [3] doesn’t want [1]—which is the actual claim. So far so good!

But is “[1] leads to [2]” actually true? Is “[3] doesn’t want [2]” true? These premises are but claims themselves, which means we do not know whether to accept or reject them. In that case, how can I jump the gun on “[3] doesn’t want [1]”? Furthermore, I am unable to test either of the claims given in the reasoning.

Conclusion: Reasoning does not seem to work when it contains claims which cannot be tested.

Corollary: All claims need to be testable.

Let’s look at an example from 80khours:

Claims: Human Civilization is at stake (due to AI)—80khours

Reasoning: (The fate of Gorillas)[1] currently depends on the (actions of humans)[2]. (Similarly, the fate of humanity)[3] may come to depend more on the (actions of machines than our own)[4].

Statement A: [1] depends on [2] and [2] is “much better” than [1]. Statement B: [3] may depend on [4] if [4] is “much better” than [3].

Statement A can be tested and let’s assume it is true. If Statement B is true then we assume, that the main claim is satisfied. Statement B cannot be tested though. The implied claim, ‘if A is true, then B is true’, also cannot be tested. In conclusion, we don’t know if “Human Civilization is at stake (due to AI)” as a result of the reasoning.

The above reasoning is what I gave a few weeks back to support the above claim. But now I realize that it does absolutely nothing to support the claim. It felt right though!

Conclusion: People can come up with as many claims as they like that “feel” right, but unless a claim can be tested, we are not going to be able to say anything about it.

So, the above reasoning didn’t give us any information about the claim, but then maybe we should just use CT on the main claim “Human Civilization is at stake”.

Subject: Human Civilization

Predicate: is at stake

Luckily for us 80khours already gives an example, which we need to check against the definition/Predicate.

Example: A pharmaceutical company that uses machine learning (ML) algorithms to synthesize drugs to cure cancer could result in human extinction. (It could turn out that the ML algorithm, found that the most effective way of reducing cancer rates was to kill the humans before they could grow old enough to develop cancer)[5].

We check this against the definition. And yes, if the example happened, it could kill more than 17 million people; civilization could be at stake. But it must be noted that [5] is itself a claim which cannot be tested. It is a hypothetical example. As we have seen above, we do not respect claims that can’t be tested. This implies that this claim should be treated at the same level as “God wants us not to eat onions”, i.e., we should throw it in the trashcan and move on because it cannot be tested. Contrast this with the following:

Claims: Human Civilization could be at stake due to Diseases (in the future)

You can’t test a “could be”. But that’s OK. The closest we can come to testing it seems to be with an example from the past, just like predicting which bolts will fail based on data from the past.

Example: In the past we have seen the Black Plague wipe out half the population of Western Europe.

There seems to be at least one example matching the definition, so civilization could indeed be at stake.

In conclusion, we see that people can give all sorts of claims, reasons, and hypothetical examples, but as long as we can’t test them with at least ONE EXAMPLE FROM THE PAST, we do not know anything about the claim.

We have been talking about having “at least one example” for some time now. Having an example for something does not mean that the claim is true! Then what is the point of giving that ONE EXAMPLE? Why not two? Why not journal-level studies?

Where do we need concrete thinking?

It seems like CT is required everywhere there are labels. To even know what someone is talking about, we need CT. People can say “JB praised some bad US senators”, and until we see an example, we should reserve our judgment about the claim. There seem to be two cases where we don’t require CT.

  • Statements of observation (raw data)

    For example: I got up in the train and hung the bag on the hook; a Santro car crashes into the pillar right in front of me.

    There seem to be no labels/claims, and as a result no CT.

  • Statements that are already concrete

    “(Mozart)[8] scored a (130 percent on the precocity index)[9] whereas (his current contemporaries)[10] scored (thirty to five-hundred percent)[10a].”

    This is already a concrete example for the claim: “Mozart isn’t a lot more precocious than modern child pianists”. There seem to be no labels/claims.

Labels are misleading. If there are no labels, then there seems to be no point in going further.

The failure cases

As per the DP routine discussed here, we would like to tabulate and identify where we are making mistakes and where we are not, and look at how to improve our overall score. This tabulation can be done under the following categories:

A) Guessing that something is right, or wrong, or that we don’t know.

B) Testing the claim, or forgetting to test the claim.

C) The claim actually being right, or actually being wrong.

This gives us twelve (3 × 2 × 2) categories into which we can sort our answers to claims, which lets us see our score. We recollect how, while watching the Trevor Noah video, I had failed to test “Kamala wiped the floor with JB”. I didn’t even realize it. The goal is to improve the score by working on where I fail most. In the next DP session we shall create a table at the end to see what our score is.
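The twelve categories come from crossing the three dimensions A), B), and C). A minimal sketch of the tabulation (the shorthand category names are my own, not from the source):

```python
from itertools import product

# The three dimensions from A), B), C) above.
guesses = ["guessed right", "guessed wrong", "didn't know"]  # A: 3 options
tested = ["tested", "forgot to test"]                        # B: 2 options
actual = ["actually right", "actually wrong"]                # C: 2 options

# Crossing them yields 3 * 2 * 2 = 12 categories for sorting answers.
categories = list(product(guesses, tested, actual))
print(len(categories))  # 12

# Example: the Trevor Noah case above falls into the category
# (guessed the claim was right, forgot to test it, it was actually wrong).
trevor_case = ("guessed right", "forgot to test", "actually wrong")
print(trevor_case in categories)  # True
```

Counting how many answers land in each of the twelve buckets would give the score mentioned above.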

Summary

Concrete thinking is testing a label (which forms a claim) by checking an example against the given definition. We shall use CT to check the claims that everyone makes. It is important to give at least one testable example for each claim, because labels are misleading.

P.S.

This essay took 22 hrs of writing and re-editing.

Later

  • one vs two vs three vs many
  • We are yet to understand when we might need journal-level proof and where we might be misled by ONLY GIVING ONE EXAMPLE.
  • How valuable is CT
  • To what extent do we give examples