Introduction

The goal is to take around one claim per chapter and write an example for a claim made in that chapter.

And maybe add one real-life explanation of how each chapter is useful?

Making beliefs pay rent in anticipated experiences

Here Eliezer Yudkowsky talks about how beliefs should behave. He says that beliefs should pay rent in anticipated experiences: a belief should predict or prohibit sensory experience. Otherwise, it is useless.

Also, there are beliefs that are not direct anticipations of sensory experience but which are still true. E.g., atoms exist, even though "atoms exist" is not a direct anticipation of sensory experience.

And these propositional statements (e.g., gravity is 9.8 m/s^2) can eventually be brought down to a sensory experience through, say, experiments or tests (e.g., you drop a ball from 120 m up and a stopwatch will read ~5 s for the fall, which we EXPERIENCE through our eyes).
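As a minimal sketch of how that propositional belief cashes out into an anticipated stopwatch reading (assuming constant acceleration and no air resistance; the function name is mine, not from the book):

```python
import math

def fall_time(height_m: float, g: float = 9.8) -> float:
    """Seconds for an object dropped from rest to fall height_m, ignoring air resistance."""
    return math.sqrt(2 * height_m / g)

print(round(fall_time(120), 1))  # ~4.9 s: the stopwatch reading we anticipate seeing
```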

Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

What is anticipated experience?

Something that you can perceive through the senses: taste, touch, hearing, smell, sight.

Belief

  • Earth’s gravity is 9.8m/s^2
  • This building is around 120 m tall.
  • The ball takes 5 seconds to fall to the ground when you look at a clock
  • There is a unicorn outside my door

How is this useful?

Claims and examples

Claim: Some beliefs that don’t result in anticipated experiences are useless.

Example-sub: There is a dragon outside my house; you can't see it, feel it, or detect it in any other way. Whether the dragon is there or not, you will not see any difference in any measuring device, or when you pour water, etc.

Definition: This does not predict anything or help constrain experience in any form.

Checklist: sub; Yes; pre; Yes; ecm; Yes;

Time:


Claim: Beliefs that result in anticipated experiences are “somehow useful”.

Subject: Beliefs that anticipate experiences

Predicate: are useful.

Example-sub: A tree falling on the ground makes a noise, which we can experience and which we can test. On the other hand, a belief such as "There is a dragon in my backyard, but you can't see it, feel it, touch it, or experience it in any other way" cannot be experienced or tested.

Definition: Yes

Checklist: sub; Yes; pre; Yes; ecm; Yes;

Time: 5-10


Claim: There are beliefs that are not anticipations of sensory experience, but that still doesn't mean they aren't part of reality.

Example-sub: Atoms exist

Definition: Can’t touch, feel, taste etc. But atoms are pnn!

Example-sub: The building next door is 80 feet tall

Definition: Can't touch, feel, or "see" this directly, but it is still 80 feet tall.

Checklist: sub; Yes; pre; Yes; ecm; Yes;

Time: 5-10


Claim: Beliefs that do not anticipate sensory experience can have an inferential consequence that is a direct sensory anticipation.

Example-sub: Earth's gravity is 9.8 m/s^2.

Definition: Implies that when we drop a ball from 120 m and look at a stopwatch, we can see the watch show about 5 seconds (a direct sensory anticipation).

Checklist: sub; Yes; pre; Yes; ecm; Yes;

Time: 5-10


Belief in Belief

A belief such as "gravity is 9.8 m/s^2" is not a direct anticipation of sensory experience. It's "propositional"; it's a bunch of words. But this belief leads to an inferential consequence that is a direct sensory anticipation (if you drop a ball from 120 m, you will see that the ball takes about 5 seconds when "measured" using a stopwatch). This we accept, as seen in the last essay.

When a belief cannot be tested, does not constrain any experience, and does not lead to any inferential consequence that is a direct sensory anticipation, we call it a floating belief ("there is a dragon in my garage"): it is not connected to any other belief that is a direct sensory anticipation.

As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it.

What is belief in belief?

You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief.”

Belief of a friend from school: It is good to not drink alcohol.

Belief in Belief: It is virtuous to believe that it is not good to drink alcohol.

No one will say, "I don't believe that it is not good to drink alcohol, but I believe I ought to believe it (as it is virtuous)."

This seems like a floating belief, it is not connected to any belief that is a direct anticipation of sensory experience.

belief: God is omniscient and omnipotent

belief in belief: I should believe that God is omniscient and omnipotent

belief: Vaishnavism is the only true religion

belief in belief: It is good to believe that Vaishnavism is the only true religion.

Professing and Cheering

What is professing and cheering?

Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game. A banner saying GO BLUES isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.

Belief as attire (belief as group identification)

I have so far distinguished between belief as anticipation-controller, belief in belief, professing, and cheering. Of these, we might call anticipation-controlling beliefs “proper beliefs” and the other forms “improper beliefs.” A proper belief can be wrong or irrational, as when someone genuinely anticipates that prayer will cure their sick baby. But the other forms are arguably “not belief at all.”

Belief as attire: Believing everything that the religion says once you have identified with it.

Soccer fans can often be seen fighting with each other (because they believe the other team "sucks", I guess). This whole rivalry, to the point of physical fights, seems to come from a belief that the fans (fanatics) share: once they have identified with the tribe, they buy into all the things of the tribe.

Suicide bombers believe a ton of things (it's virtuous and heroic and justified and right to die for Islam), possibly because they have identified themselves with the religion and so believe everything about it.

So far

I don't think the last few essays are too useful, or rather, I don't know how to apply them to daily life. But let's move on to other parts of this book.

Pretending to be wise

What does it mean to pretend to be wise?

E.g.,

A principal who stays neutral about a fight and only cares to end it because it is an inconvenience for him, without summing up the evidence or asking who did what first, is an example of pretending to be wise.

A similar dynamic, I believe, governs the occasions in international diplomacy where Great Powers sternly tell smaller groups to stop that fighting right now. It doesn’t matter to the Great Power who started it—who provoked, or who responded disproportionately to provocation—because the Great Power’s ongoing inconvenience is only a function of the ongoing conflict. Oh, can’t Israel and Hamas just get along?

This I call “pretending to be Wise.” Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses—refusing to sum up evidence—refusing to pass judgment—refusing to take sides—staying above the fray and looking down with a lofty and condescending gaze—which is to say, signaling wisdom by saying and doing nothing—well, that I find particularly pretentious.

Paulo Freire said, “Washing one’s hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral.” A playground is a great place to be a bully, and a terrible place to be a victim, if the teachers don’t care who started it.

Appearing to take the high ground by taking a neutral stance in a fight, refusing to sum up evidence, and refusing to pass judgment is what is defined as "pretending to be wise".

Likewise with policy questions. If someone says that both pro-life and pro-choice sides have good points and that they really should try to compromise and respect each other more, they are not taking a position above the two standard sides in the abortion debate. They are putting forth a definite judgment, every bit as particular as saying “pro-life!” or “pro-choice!”

I don’t know what to do with this though, for real!

Applause Lights

What does applause lights mean?

The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?

I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.

Vague

Eliezer Yudkowsky shows how to identify applause-light statements:

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn’t balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.

I am not sure what this "abnormal" thing is, or what he means by "it does not convey new information".

What I "understand" is that we should not fall for such applause-light statements, which add NO VALUE and convey no additional information to us, other than making an audience cheer or clap.

Focus your uncertainty

How do you decide what to focus your limited resources on, in real-life situations where no probabilities are handed to you?

You’re pretty sure you weren’t taught anything like that in your statistics courses. They didn’t tell you what to do when you felt so terribly uncertain. They didn’t tell you what to do when there were no little numbers handed to you. Why, even if you tried to use numbers, you might end up using any sort of numbers at all—there’s no hint what kind of math to use, if you should be using math! Maybe you’d end up using pairs of numbers, right and left numbers, which you’d call DS for Dexter-Sinister . . . or who knows what else? (Though you do have only 100 minutes to spend preparing excuses.)

If only there were an art of focusing your uncertainty—of squeezing as much anticipation as possible into whichever outcome will actually happen!

But what could we call an art like that? And what would the rules be like?

What is evidence?

What is evidence? It is an event entangled, by links of cause and effect, with whatever you want to know about. If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces. This should not be confused with the technical sense of “entanglement” used in physics—here I’m just talking about “entanglement” in the sense of two things that end up in correlated states because of the links of cause and effect between them.

Event –> Light entering your pupils

What you want to know about –> shoelaces

“entanglement” –> correlated

Not every influence creates the kind of “entanglement” required for evidence. It’s no help to have a machine that beeps when you enter winning lottery numbers, if the machine also beeps when you enter losing lottery numbers. The light reflected from your shoes would not be useful evidence about your shoelaces, if the photons ended up in the same physical state whether your shoelaces were tied or untied.

We want evidence that changes state with changes in the "whatever we want to know about", i.e., the shoelaces.
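A tiny sketch of that requirement in odds form (the probabilities below are made up for illustration): an observation only counts as evidence if it is more likely under one state of the shoelaces than under the other.

```python
def posterior_odds(prior_odds: float, p_obs_if_tied: float, p_obs_if_untied: float) -> float:
    """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_obs_if_tied / p_obs_if_untied)

# A "sensor" that reacts the same way whether the laces are tied or untied is useless:
print(posterior_odds(1.0, 0.9, 0.9))  # 1.0 -> belief unchanged, zero evidence
# One that reacts far more often when the laces are tied does shift the belief:
print(posterior_odds(1.0, 0.9, 0.1))  # 9.0 -> odds now favor "tied" 9:1
```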

If your eyes and brain work correctly, your beliefs will end up entangled (correlated) with the facts.

So try to explain why the kind of thought processes you use systematically produce beliefs that mirror reality. Explain why you think you’re rational. Why you think that, using thought processes like the ones you use, minds will end up believing “snow is white” if and only if snow is white. If you don’t believe that the outputs of your thought processes are entangled with reality, why believe the outputs of your thought processes? It’s the same thing, or it should be.

Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen. As a rationalist, are you licensed to believe this statement? Put it this way: if you go ahead and insult Wulky, I’d call you foolhardy. Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner’s statement must have been strong Bayesian evidence.

Police Commissioner's observations on the night of April 4th: admissible as legal evidence? YES.

Police Commissioner's (hearsay) statement that Wulky Wilkinsen is the crime kingpin of the city: admissible as legal evidence? NO.

Rational Evidence

Claim: All legal evidence is “ideally” rational evidence but not the other way around.

Above statements as examples

It is really unclear what this "rational evidence" is! Let's say for now that it is something you can act on, like the police commissioner's statement that Wulky Wilkinsen is the crime kingpin of the city.

Is a rationalist licensed to believe in the historical existence of Alexander the Great? Yes.

Scientific evidence

Needs to be testable in the future and tested.

e.g., evidence from open-access journals or closed journals that cost $20k per year.

Historical knowledge is not scientific knowledge. (Scientific? NO.)

It may seem perverse to deny the adjective “scientific” to statements like “The Sun will rise on September 18th, 2007.” (Scientific? YES.)

The ultimate difference

As I write this sentence at 8:33 p.m., Pacific time, on August 18th, 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No, because there is no experiment you can perform yourself to verify it.

WTF

But imagine that you’re constructing an experiment to verify prediction #27, in a new context, of an accepted theory Q. You may not have any concrete reason to suspect the belief is wrong; you just want to test it in a new context. It seems dangerous to say, before running the experiment, that there is a “scientific belief” about the result. There is a “conventional prediction” or “theory Q’s prediction.” But if you already know the “scientific belief” about the result, why bother to run the experiment?

How much evidence does it take?

Previously, I defined evidence as “an event entangled, by links of cause and effect, with whatever you want to know about,” and entangled as “happening differently for different possible states of the target.” So how much entanglement—how much rational evidence—is required to support a belief?

e.g.,

Let’s start with a question simple enough to be mathematical: How hard would you have to entangle yourself with the lottery in order to win? Suppose there are seventy balls, drawn without replacement, and six numbers to match for the win. Then there are 131,115,985 possible winning combinations, hence a randomly selected ticket would have a 1/131,115,985 probability of winning (0.0000007%). To win the lottery, you would need evidence selective enough to visibly favor one combination over 131,115,984 alternatives.

Using Bayes theorem, prior odds and posterior odds,

A better way of viewing the problem: In the beginning, there is 1 winning ticket and 131,115,984 losing tickets, so your odds of winning are 1:131,115,984. If you use a single box, the odds of it beeping are 1 for a winning ticket and 0.25 for a losing ticket. So we multiply 1:131,115,984 by 1:0.25 and get 1:32,778,996. Adding another box of evidence multiplies the odds by 1:0.25 again, so now the odds are 1 winning ticket to 8,194,749 losing tickets.

It so happens that 131,115,984 is slightly less than 2 to the 27th power. So 14 boxes or 28 bits of evidence—an event 268,435,456:1 times more likely to happen if the ticket-hypothesis is true than if it is false—would shift the odds from 1:131,115,984 to 268,435,456:131,115,984, which reduces to 2:1. Odds of 2 to 1 mean two chances to win for each chance to lose, so the probability of winning with 28 bits of evidence is 2/3. Adding another box, another 2 bits of evidence, would take the odds to 8:1. Adding yet another two boxes would take the chance of winning to 128:1.

So if you want to license a strong belief that you will win the lottery—arbitrarily defined as less than a 1% probability of being wrong—34 bits of evidence about the winning combination should do the trick.

  • This was very "math heavy". A good post to look back at tomorrow and do the math yourself.

Of course, you can still believe based on inadequate evidence, if that is your whim; but you will not be able to believe accurately. It is like trying to drive your car without any fuel, because you don’t believe in the fuddy-duddy concept that it ought to take fuel to go places. Wouldn’t it be so much more fun, and so much less expensive, if we just decided to repeal the law that cars need fuel?

My summary

A lottery where you match 6 numbers drawn from 70 balls has 131,115,985 possible combinations (70!/(6! × 64!)).

Suppose you have a box that beeps 100% of the time when the winning combination is punched in, and 1 time in every 4 when a losing combination is punched in, i.e., if you punch in 20 losing combinations the box says 5 of them are winning combinations.

likelihood ratio –> P(beep | winning combination) : P(beep | losing combination) –> 1 : 0.25, i.e., 4:1

  • Odds of winning as is –> 1:131,115,984
  • Odds of winning after one pass through a box with likelihood ratio 4:1 –> 1:32,778,996
  • Odds of winning with 2 boxes, likelihood ratio 4:1 each (and independent), where you test the combination twice (because they are independent events, you can multiply their probabilities):

The probability of both boxes beeping for a losing combination is thus 1/16, i.e., the combined likelihood ratio is 16:1.

  • Odds of winning with 2 boxes (combined likelihood ratio 16:1) –> 1:8,194,749 (see the sketch below)
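A minimal sketch of the odds arithmetic above (the 4:1 beeping box is the essay's hypothetical device; the helper name is mine):

```python
from math import comb

tickets = comb(70, 6)            # 131,115,985 possible combinations
prior_odds = 1 / (tickets - 1)   # 1 winning ticket : 131,115,984 losing tickets

def odds_after_boxes(odds: float, n_boxes: int, likelihood_ratio: float = 4.0) -> float:
    """Multiply the odds by the 4:1 likelihood ratio once per independent box."""
    return odds * likelihood_ratio ** n_boxes

print(round(1 / odds_after_boxes(prior_odds, 1)))  # 32,778,996 losing tickets per winner
print(round(1 / odds_after_boxes(prior_odds, 2)))  # 8,194,749
```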

How many boxes do we need to know the winning combination?

What eventual odds do you want? Say 1:1 (a 50% chance of winning); then you need log4(131,115,984) ≈ 13.5 boxes, i.e., 14 boxes.

The use of log becomes apparent, as you are dealing with powers.

What eventual odds do you need to generate a "strong belief" that you will win the lottery, arbitrarily defined as less than a 1% probability of being wrong, i.e., a 99% chance of winning? Then you need:

  • prior odds × likelihood ratio = eventual (posterior) odds

  • (1/131,115,984) × 4^n ≥ 99/1

n ≈ 16.8, so 17 boxes (34 bits).
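The same relation solved for the number of boxes, as a sketch (each independent 4:1 box contributes a factor of 4, i.e. 2 bits; the helper name is mine):

```python
import math

losing_tickets = 131_115_984   # prior odds of winning are 1 : losing_tickets

def boxes_needed(target_odds: float, likelihood_ratio: float = 4.0) -> int:
    """Smallest n such that (1/losing_tickets) * likelihood_ratio**n >= target_odds."""
    return math.ceil(math.log(target_odds * losing_tickets, likelihood_ratio))

print(boxes_needed(1.0))   # 14 boxes (28 bits) for roughly even odds
print(boxes_needed(99.0))  # 17 boxes (34 bits) for a <1% chance of being wrong
```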

In general, the rules for weighing “how much evidence it takes” follow a similar pattern: The larger the space of possibilities in which the hypothesis lies, or the more unlikely the hypothesis seems a priori compared to its neighbors, or the more confident you wish to be, the more evidence you need.

If you use only 10 boxes, then the eventual odds you have are about 1:125.

You cannot stop on the first combination that gets beeps from all 10 boxes, saying, "But the odds of that happening for a losing combination are a million to one!"

That's just not taking the prior odds into account. The likelihood ratio is about a million to one (4^10:1), but that is not enough to overcome prior odds of 1:131,115,984.


Understanding bits

It so happens that 131,115,984 is slightly less than 2 to the 27th power. So 14 boxes or 28 bits of evidence

This refers to bits in the mathematical sense: representing the number above in binary takes about 27 bits (just as representing 8, which is 1000 in binary, takes 4 bits). Each 4:1 box is worth log2(4) = 2 bits of evidence, so 14 boxes give 28 bits.

https://www.lifewire.com/how-to-read-binary-4692830
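A quick sketch of that bookkeeping: one bit is a factor of two in the likelihood ratio, so a 4:1 box carries 2 bits, and picking out one ticket among ~131 million takes about 27 bits.

```python
import math

print(math.log2(131_115_984))  # ~26.97 bits needed to single out one combination
print(math.log2(4))            # 2.0 bits of evidence per 4:1 box
print(14 * 2)                  # 28 bits from 14 boxes -- just over the ~27 needed
```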

Einstein's arrogance

To go from prior odds of 1:100,000,000 to posterior odds of 1:1 (50% probability), you need a shit ton of evidence. How much of a shit ton? log2(100,000,000) ≈ 27 bits.

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts). You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests. If you try to apply a test that only has a million-to-one chance of a false positive (~ 20 bits), you’ll end up with a hundred candidates. Just finding the right answer, within a large space of possibilities, requires a large amount of evidence.
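The arithmetic in that paragraph, sketched out under its own assumptions (a pool of 100,000,000 hypotheses and a test with a million-to-one false-positive rate):

```python
import math

hypotheses = 100_000_000
print(math.log2(hypotheses))             # ~26.6 bits needed to pick out one candidate

false_positive_rate = 1 / 1_000_000      # a ~20-bit test
print(hypotheses * false_positive_rate)  # ~100 wrong candidates still pass it
```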

Discussion with an STM, Dec 2021

General and basic

  1. How did I treat myself?
  2. work, San Jose
  3. flossing, meditation, cryonics, cooking, car: yes/no type questions
  4. fitness goals, Sports, cheat meals, long life
  5. Injuries update
  6. Dvorak or not?
  7. Upgrading Ubuntu: what do you do?
  8. Emacs tools that you use? What all do you use Emacs for? Special modes?
  9. Standing desk? Back pain?
  10. Relationships!
  11. Chrome is fucking slow after I have 2 YouTube tabs open (32 ram 8gb)
  12. Cryonics
  13. Anki cards?
  14. cars

Skills

  1. How do you spend your time and on what?
  2. What are you DPing (deliberately practicing) now?
  3. What are your goals?
  4. What should be my goals?
  5. I am reading Eliezer Yudkowsky's essays, and tried reading more than 400 pages of HPMOR, but I am not sure "I am learning something". What is a better way to approach "becoming better at rationality"? What should I practice about "rationality": claims and examples? What do you do?

  6. My plan to Deliberately Practice in Analytics: get a job, find people, and pursue things there?
  7. My plan to Deliberately Practice "rationality": no idea? Read more essays?

EA

  1. How are you doing EA?
  2. To whom are you giving? why? What is the bigger plan?
  3. Dropping to 1% for the coming years.
  4. Thinking what sort of goals should I have and do you have?
  5. What do you do for persuasion? Do you feel sad? Are you ever depressed? Bored? How do you deal with it? What do you think about therapy?
  6. How do you feel motivated to work longer hours? Grind day in and day out.

Masters

  1. 6 months to 2 years of experience + a 1-2 year master's: thoughts?
  2. How to identify whether universities have assistantships, and the probabilities? How did you do it?
  3. Is it more "difficult" to do a master's when older?
  4. For the master's I'll probably have to take a loan.

Most important

  1. Goals and what to work on in the coming months