Issues

Issues can be hypothetical or from the past.

There are (many issues)[1] we haven’t been able to look into yet, so we expect there are other (high-impact areas we haven’t listed)[3].

Claim: There could be other [3].

Question: Could there be other [3]?

Example A: Until a few years back, 80khours thought the best area to work on was reducing near-term risks to life (i.e., reducing global health risks). But once they explored the global catastrophic risks that could kill the entire planet and future generations, they changed their stance on where people should be working, considering the impact.

Example Hypothetical: If medical research into ‘how to slow aging’ seems largely promising (say a 95% chance of success with $10b and 100 extra people) at delivering a mechanism that doubles human life expectancy, it could be beneficial to work on it, as it could save 95% * 7b expected lives / 100 = 66m expected lives per person working on it.
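A one-line check of the arithmetic above; the success probability, number of beneficiaries and team size are just the assumptions stated in the example.

```python
p_success = 0.95      # assumed chance the aging research succeeds
beneficiaries = 7e9   # everyone alive today
extra_people = 100    # additional people needed

print(f"{p_success * beneficiaries / extra_people:,.0f}")  # ~66,500,000 expected lives per person
```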

In this case, I could give a hypothetical example or an example from the past, which is what I have done here. Can you comment?


Because

Claim: Due to [1], it is good to work on [2].

because

We skip these for now, marking them with just the word “because”.

Given AB

Given our take on (the world’s most pressing problems)[1] and the (most pressing bottlenecks these issues face)[2], we think the following (five broad categories of career)[3] are a good place to (start generating ideas)[4] if (you have the flexibility to consider a new career path)[5].

Claim: Given [1] and [2], it appears that following [3] is a good place for [4].

Question: Is following [3], a good place for [4]?

I am not sure how to give an example Given [1] and [2]. So I skip this for now.

Bad writing vague words

Given our take on (the world’s most pressing problems)[1] and the (most pressing bottlenecks these issues face)[2], we think the following (five broad categories of career)[3] are a good place to (start generating ideas)[4] if (you have the flexibility to consider a new career path)[5].

Claim: It appears that following [3], is a good place for [4].

Question: Is following [3], a good place for [4], if [5]?

What does [4] even mean? What is the point of it? Should I bother with these types of seemingly shit sentences, or skip them? Who cares about generating ideas, and why?

In general

(Many of the top problem areas we focus on)[1] are mainly (constrained by a need for additional research)[2], and we’ve argued that (research)[3] seems like (a high-impact path in general)[4].

Claim: [3] seems like [4].

Example: Working at MIRI as a researcher could save 57k lives and is about 1,000 times better bang-for-the-buck than working on climate change.

I don’t know what ‘in general’ means, so I skip it!

Contrasting

You can also (support the work of other researchers)[1] in a (complementary role, such as a project manager, executive assistant, fundraiser or operations)[2]. We’ve argued (these roles)[3] are often neglected, and therefore especially high-impact. It’s often useful to have (graduate training in the relevant area)[4] before taking these roles.

Claim: It is good to [1] in [2].

Example: As discussed earlier, AI safety is really quite neglected, with 100 people working on it and $10m in funding. Niel Bowerman from 80khours is trying to add the people required to fill the “talent gaps”. If Niel is able to add 10 more people and claim even 1% of their total impact, that would be 570 lives saved just for his work over a few years. Contrast that with a DS job, which saves about 400 people.

I think it is important to contrast it with something; otherwise it is hard for someone to tell whether it is good or bad. Do you agree that I should always contrast?

How useful

You can also (support the work of other researchers)[1] in a (complementary role, such as a project manager, executive assistant, fundraiser or operations)[2]. We’ve argued (these roles)[3] are often neglected, and therefore especially high-impact. It’s often useful to have (graduate training in the relevant area)[4] before taking these roles.

Claim: It is useful to have [4] before [3].

Example:

Split:

For [4] before [3]: Niel Bowerman has a PhD (or equivalent) in Physics, where he worked on existential risks of extreme climate change with a focus on providing emission targets.

Also, Sean O hEigeartaigh from CSER has a PhD in Genome Evolution; he is also known for growing the number of people at FHI and securing roughly $3m in funding. Now he works entirely in operations: grant writing, fundraising, long-term planning, etc.

Not sure how “useful” [4] is before [3]! or how to even go about answering it!

This seems to be exactly what happened with Rob Mather from AMF who claimed that his sales experience helped him.

facts

(Mozart)[8] scored a (130 percent on the precocity index)[9] whereas (his current contemporaries)[10] scored (thirty to five-hundred percent)[10a].

Claim: [8] scored [9].

Example: All this probably requires is a citation?

if then proof-type statements

If talent existed and (refused to show itself even after so many years of life)[6], it begs the question of whether (innate ability)[7] (talent) even exists.

Claim: If [7] exists and it [6], then [7] doesn’t exist

Example: I am unable to give examples for these if-then / proof-type statements.

Could lead to

(These advances)[5] could lead to (extremely positive developments, presenting solutions to now-intractable global problems)[6], but they also pose (severe risks)[7]. (Humanity’s superior intelligence)[8] is pretty much the sole reason that (it is the dominant species on the planet)[9]. If (machines surpass humans in intelligence)[10], then just as the fate of gorillas currently depends on the actions of humans, the (fate of humanity may come to depend more on the actions of machines than our own)[1].

Claim: [5]/[3]/[1] could lead to [6].

Example:

AlphaGo identified superior ways of playing Go that humans had dismissed as rubbish for thousands of years. Computers seem able to go beyond what humans can see after years and years of work, within just a year. Similarly, it could become possible to cure cancer and other diseases.

How do I give an example for “could lead to”? I don’t think I have given one above!

if then

(These advances)[5] could lead to (extremely positive developments, presenting solutions to now-intractable global problems)[6], but they also pose (severe risks)[7]. (Humanity’s superior intelligence)[8] is pretty much the sole reason that (it is the dominant species on the planet)[9]. If (machines surpass humans in intelligence)[10], then just as the fate of gorillas currently depends on the actions of humans, the (fate of humanity may come to depend more on the actions of machines than our own)[1].

Claim: if [10] then [1].

I have no idea how to address this claim; how do I give an example that supports an “if A then B” statement?

Entire article from start to finish challenge

At 80,000 Hours, we (help people find careers that more effectively ‘make a difference’)[1], ‘do good’, or ‘have a positive impact’ on a (large scale)[2].

Question: Does 80khours do [1] on [2]?

Example: Here we see study after study about how people’s plans “amazingly changed” to something else. Zero fucks seem to be given about numbers that would say things like, “if he had continued he would have had X impact, but he didn’t, and look at his current impact! Suck it”. Maybe there is more explanation later.

Making a difference on a large scale may be thought of as something like saving 1,000 people over a lifetime. I will take that as a win then? Or…

Here, we lay out what we mean by these phrases. In a nutshell ‘(making a difference)[3]’ is about (promoting the long-term welfare of everyone)[4] in ways that (respect the rights of others)[5].

Not a claim but a definition.

Split:

For [4], what does it mean? increasing the life expectancy? in underdeveloped countries?

From the AMF website, we think of decreasing the number of deaths from 1.5 million to 0.5 million, allowing 1 million more people to live. All this per year!

For [5], I am not really sure what sort of example would cut it. Right to pray? right to eat? right to live in their environment, right to healthcare, right to cheap education.

But I still don’t know what it means to do [4] in ways that [5]. I skip it for now. Not worth spending time on this shit!

Example:

This section also sketches out some of the ethical considerations that inform our advice. Much of our advice doesn’t entirely depend on these views, but we think it’s important to be transparent about them. If you want to read our practical suggestions about which global problems and careers to focus on, skip ahead.

So we skip the whole thing above, and we deal with it as it comes in the passage. I am not interested if they have actually detailed something out in the

impartial concern

When it comes to (making a difference)[1], we aim to be (impartial)[2] in the sense that we give (equal weight to everyone’s interests)[3]. This means we strive to (avoid bias against people based on their race, gender, sexuality or other identities)[4]. We also try not to (privilege any particular place, nation, time, or even species above any other)[5].

Question: Does 80khrs aim to give [3]?

Example: I will skip this, and move to the next sentence that has more concrete explanations of what they mean with “[3]”.

Aim implies that an example needs to be provided which aims at being impartial, but needn’t succeed?

Question: Does 80khours strive to avoid bias against race?

Example: 80khours supports giving donations to the Effective Altruism Foundation, which in turn funds GiveWell, which funds work to reduce deaths in sub-Saharan Africa. But this only shows that it is not biased against black people?

Question: Does 80khours strive to avoid bias against people based on gender?

Example: 80khours supports donations to sub-Saharan Africa, where gender doesn’t come into the picture.

Is this example enough?

Question: Does 80khours try not to [5]?

Split:

What does “above any other” even mean?

Instead, we aim to have (moral concern for the interests of all sentient beings in proportion to how much they will gain or lose by our actions.)[6] This includes (those who are far away from us)[7] as well as (those who will (potentially) be members of future generations)[8].

Question: Does 80khours have [6]?

Example: 80khours supports donations to GiveWell (indirectly via EAF), which primarily directs funds to sub-Saharan Africa. GiveWell looks at the number of lives saved per dollar spent to identify the best charities and funds them. It happens that in sub-Saharan Africa the number of lives saved per dollar is the highest. This way they don’t focus on everyone in need, but on the people in need whom they can help the most.

Question: Are the people 80khours focuses on [7]?

Example: 80khours is based in California but focuses on health in poor countries (sub-Saharan Africa).

Question: Does 80khours focus on [8]?

Example: 80khours rates AI risk, climate change and also factory farming as top-priority problems, as they have the potential to end the world completely, reducing the population to 0, which will probably be tolerable for this generation, but what happens to all the future generations is really scary.

From this perspective, we aim to increase (the expected welfare of others)[9] by (as much as possible)[10], (enabling more individuals to have lives that are long, healthy, fulfilled, and free from avoidable suffering)[11].

Question: Does 80k aim to increase [9]?

Example: 80k tries to get people to donate to GiveWell, which in turn funds AMF, which has reduced the number of deaths in sub-Saharan Africa.

Question: Has 80k done [11]?

Example: 80k tries to get people to donate to GiveWell, which in turn funds AMF, which has reduced the number of deaths in sub-Saharan Africa.

As individuals, (we all have other goals)[12] besides (impartially making a difference in this way)[13]. (We care about our friends, personal projects, other moral aims, and so on)[14]. But we think the (impartial perspective)[15] is an important one, and it’s what our (research and recommendations)[16] are focused on.

Question: Do we ALL care about other things, besides [13]?

Example: I’m like, what does “goal” even mean? Depending on the definition of goals, the claim can be true or not true. Then, once you read the rest, I guess they are talking about caring!

For my part, besides EA, I care about making a significant design contribution at work, like the entire layout of the stage.

Question: Is [15] an important one?

Example: GiveWell suggests putting money into sub-Saharan Africa to treat malaria and save X lives for Y dollars. GiveWell doesn’t support the money being used to solve homelessness in the US, for example. Thereby GiveWell saves more lives.

Question: Is 80k’s research focused on [15]?

Example: 80k, in its problem profiles, covers “health in poor countries” as well as “factory farming and its inhumane treatment of animals”.

longtermism (checking and correcting)

(Homo sapiens)[0] is still an (infant species)[1]. We evolved around 200,000 years ago, and industrial civilization only began several hundred years ago; however, the average species lasts for 1-10 million years. With the (benefit of technology and foresight)[5], humanity could in principle survive for at least as long as the earth is habitable — (probably hundreds of millions of)[6] years.

Question: Is [0], [1]?

Example: Humans have been around for about 200k years. The average species lasts for 1 to 10 million years.

Question: With [5], could we survive for [6] years?

Example: If technology is able to address potential existential risks like climate change or a meteor crashing into the earth, and also able to eradicate life-shortening diseases like malaria, cancer, immune disorders, etc., then potentially we can survive as long as the earth is habitable.

How do you give an example of something you predict in the future?

The (possibility of a long future)[1] means there will, in expectation, be (far more people in the future than there are alive today)[2]. (Impartial concern)[3] most likely implies (we should value their welfare as much as anyone’s)[4]. If (our actions)[5] can predictably (affect future generations in nontrivial ways)[5a], then because the (welfare of so many others would be at stake)[6], (these effects would be what most matter morally about our actions)[7].

Question: Does [1], mean [2]?

Example:

If:

  • we live on a planet where, thanks to technology, diseases are rare and the threat from existential risks is low,

  • and, as a result, we continue with the same growth rate of 1.07%/year,

then: we will have roughly 21 billion people within 100 years (3 times today’s population)!

The world’s population increased from 1 billion in 1804 to 4 billion by 1974, and the doubling from 2 to 4 billion took under 50 years. If technology continues to
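A quick compounding check of the growth assumption above, taking roughly today’s population as the starting point:

```python
population = 7e9       # roughly today's population
growth_rate = 0.0107   # 1.07% per year, as assumed above
years = 100

print(f"{population * (1 + growth_rate) ** years:,.0f}")  # ~20.3 billion, about 3x today
```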

How do you give an example of something you predict in the future?

Question: Does [3], most likely imply [4]?

Example:

I don’t know how to give an example for deductions and reasoning like the above. Or maybe this is a definition.

I see “most likely”; I guess all this requires is one example from my side!

Question: Can [5], do [5a]?

Example: According to here, a 2-degree temperature increase within this century can raise sea levels by 0.9m, which poses an increased chance of floods, starting with coastal regions around the world, and is expected to displace 200-300 million people.

Question: if [5] can [5a], then [7] is because of [6].

Example: “Because”, so we skip for now

As there are many billions more lives at stake this implies that this is one of the most important problems.


Breaking down [7]

(these effects)[a] would be what (most matter morally)[b] about our (actions)[c]

For [a], we think of the rise in sea levels and the increased chance of floods, i.e., the death of billions of people and the potential failure of mankind to continue.

For [b], we think of billions of lives saved (over a million years).

For [c], we think of not doing anything about the rise in temperature, i.e., no change in policies to control rising temperatures.

If (this)[0] is correct, then (approaches to improving the world)[1] should be evaluated mainly in terms of (their potential long-term impact, over thousands, millions, or even billions of years)[2]. Making these evaluations is part of an emerging field of study called longtermism.

For “[7] is correct”, we take “saving n lives over a billion years” as what matters to us.

Claim: [1] should be evaluated based on [2], if [0] or [7] is true.

Question: Should [1], be evaluated based on [2], given [7] is correct?

Example:

Working on keeping the temperature increase within 2 degrees this century could potentially save billions of lives over millions of years, as the earth would still be available for habitation. If instead we choose to improve the quality of life of the homeless in the US, we might not end up keeping the temperature within 2 degrees and could possibly lose billions of lives.

This one paragraph above took me at least 2 hrs over multiple days, I think primarily because I didn’t understand [7]. The clue was in properly understanding what [7] meant, hence the breakdown. Your comments? Is the depth at which this example was covered satisfactory? I.e., do we need to know exactly what “potentially save billions of lives over millions of years” means?


It’s difficult to (predict the long-term effects of our actions)[1], but we think it’s clear that (the interests of future generations)[2] are neglected by (most people and institutions today)[3], suggesting there are (untaken opportunities to help)[4]. We also think (some of our actions)[5] do have (very long-term effects)[6] — at the very least we can affect the (probability of existential risks)[7], as covered in the next section, and there may be (other ways to affect the future as well)[8].

Claim: Difficult to predict [1].

Question: Is it Difficult to predict [1]?

Example: One estimate of the temperature increase is 4 degrees; the uncertainty ranges from 2.4 to 6.4 degrees Celsius. Damages from climate change are roughly proportional to the square of the temperature change. So it is difficult to predict what is going to happen: a 2-degree increase leading to a 9-meter sea-level rise, or a 6-degree increase that would be “much, much worse”.

I Googled climate change for a while, but I am not able to come up with a proper example. The example would need to look like this, I guess: look at climate models A, B and C and see the variance in their estimates? Or something that shows the uncertainty in predicting the likelihood of a disaster.

Claim: [2] is neglected by [3].

Question: Who or what are the [3] that neglect [2]?

Example: There are about 100 people worldwide working on the “control problem” for AI, so that machines pursue “realistic human goals safely”. One simple example of the harms of AI could be an AI deciding that killing people is the best way to stop a virus from spreading. AI could thus affect future generations.

Claim: [2] neglected by [3] suggests [4].

Question: Is there [4], given [3] neglect [2]?

Example: “Positively shaping AI” has 100 people working on it with a budget of $100m, for something that could “potentially save billions of lives.” Every additional person working here could potentially contribute to saving billions of lives.

I don’t know if the above example makes sense for “given [3] neglect [2]”

Claim: [5] has [6].

Question: What are the [6], of [5]?

Example:

According to here, it takes much more than 100 years for greenhouse gases to subside to levels that bring the temperature down by even 1 degree. We are expecting a 6-degree rise with 10% probability by 2100. The number of people living in water-stressed river basins increases by

Actually, I can’t find one spot where I can clearly identify a long-term impact. In most cases either the number of lives is not accounted for, or the outcome is better with CC than without!

Claim: We can at least affect [7].

Claim: There may be [8].

We remain unsure about (many of these arguments)[1], but overall we’re persuaded that (focusing more on the very long-term effects of our actions)[2] is (one of the most important ways we can do more good)[3]. Such a (radical claim)[4] requires (much more argument)[5], and we outline the (considerations for and against it)[6], as well as (list further reading)[7], in our full article on this topic.

Claim: We are unsure about [1].

Question: Are we unsure about [1]?

Example: I don’t know what they are unsure about and why!

For [1] we think of: “With the benefit of technology and foresight, humanity could survive at least as long as the earth is habitable – a few hundred million years.”

Claim: We are persuaded that [2] is [3].

Question: Are we persuaded that [2] is [3]?

Example: We can save a few tens of people by focusing on homeless people in the US, or we can possibly save billions of people by focusing on climate change. The latter keeps future generations in mind!

Claim: [4] requires much more of [5].

Question: Does [4], require much more of [5]?

Example: There is probability involved, and there are no numbers given. Is this an example?

Claim: We outline [6], as well as [7] in this article.

Question:

Example:

Need to reflect on the article! no internet access for now!

Moral uncertainty and respecting rights!

As covered, we think that the most important thing for us to focus on from an impartial perspective is increasing the long-term welfare of everyone, such as by helping people have longer, more fulfilling, and happier lives. However, we are not sure that this is the only thing that matters morally.

(Some moral views)[1] that were widely held in the past are regarded as (flawed or even abhorrent today)[1a]. (This)[2] suggests we should expect our (own moral views)[3] to be (flawed in ways that are difficult for us to recognize)[4]. What’s more, there is still (significant moral disagreement)[5] within (society)[6], (among contemporary moral philosophers)[7], and, indeed, (within the 80,000 Hours team)[8]. It’s also (extremely difficult)[9] to know all the (ethical implications of our actions)[10], and (grand projects to advance abstract ethical aims)[11] often go badly.

Claim: [1], that were widely held in the past are regarded as [1a].

Question: Are [1] that were widely held in the past regarded as abhorrent today?

Example: There was a time when black people were seen only as slaves, but that view is now widely regarded as abhorrent!

Claim: [2] suggests [3] to be [4].

Question: Does [2] suggest that [3] is [4]?

This is reasoning. How do you give an example showing that it suggests [3] is flawed? Skipping this for now!

Claim: [3] could be [4]

Question: Is [3], [4]?

Example: Until a few years back, all I was focusing on was women and not on things like EA, which is what I should have been focusing on all along. If it were not for an STM who spent countless hours explaining to me how my moral views were flawed and needed to be oriented towards EA, I am not sure I would have recognized it.

Claim: There is [5] within [6].

Question: Is there [5] within [6]?

Example: Recently, anti-abortion laws were passed in Alabama, Missouri and Georgia. The laws said that even in the case of rape an abortion cannot go through. This is a predominantly Republican view, and Democrats seem to be completely against it.

Claim: there is [5] within [7].

Question: Is there [5], within [7]?

Example: Can’t find an example for it. Looked at Robin Hanson and Eliezer and also things to do with Peter Singer

Claim: There is [5] within [8].

Not sure I can come up with this without needless research on the whole 80000 website. So I skip this!

Claim: It is [9], to know all [10].

Question: Is it [9], to know all [10]?

Example: I work for a world leader in Lithography and I am able to donate 10% of my income. I think I have saved at the end of the year approximately one life. What I don’t know is how many lives I have taken as a result of promoting this company. For example, this company needs material and sources from the earth, I don’t know where its materials are sourced from and if for example child labor was involved, or poor conditions of work and health leading to reduced life of someone else. Or another example would be the depletion of resources in the end bringing the earth to a stop a few days earlier. I have no way of estimating it and hence it is difficult.

Claim: [11] often goes badly!

Question: Does [11] often go badly?

Split:

For grand projects, we think of Heifer International and their giving of livestock to “people in need”.

For abstract ethical aims, we think of their aim to “improve the lives” of people or “bring them out of poverty”

For goes badly, we think of the lack of evidence from Heifer’s side to prove the claim that “a cow is better for you than anything else you could buy with what the cow costs”.

Here are more “grand schemes” that GiveWell doesn’t support, aka ones that “go badly”.

Once more a reminder that without evidence you are nothing!


As a result (of (“grand projects going badly”)[1]), we think it’s important to be (modest about our moral views)[2], and in the (rare cases where there’s a tension)[3], try very hard to (avoid actions that seem seriously wrong from a common-sense perspective)[4]. This is both because (such actions)[5] might be (wrong in themselves)[6], and because they seem likely to lead to (worse long-term consequences)[7].

More generally, we aim to factor ‘moral uncertainty’ into our views and to uphold cooperative norms. We do this by taking into account a variety of reasonable ethical perspectives, rather than simply acting in line with a single point of view.

Claim: Due to [1], it is important to be [2].

Question: Why does [1], lead to [2] being important?

Example: because

Claim: It is important to be [2].

Question: Why is it important to be [2]?

Example:

For moral views, we think of “a cow is better for poor people than giving them cash for the cow’s worth”.

For not modest, we think of Heifer International’s continued claim to support giving of livestock, without demonstrating the effectiveness of such a gift.

For important, we think of 0 funds being delivered to Heifer from GiveWell. And possibly people taking out funds or demanding more evidence from them.

For important I wanted to say something about the wastage of funds as HI spends on something whose effectiveness it doesn’t know, but I don’t have any numbers!

Some mental masturbation perhaps! but whateves. Am I actually enjoying it? I feel happy when I am able to come up with an example as a result of my research! Sometimes I mentally masturbate, procrastinate while looking for info!

Claim: Due to [1], during [3], try very hard to [4].

The English is very troubling. Are they bad writers, or what? How would anyone know what they mean by a “common-sense perspective” (unless, I guess, they read 80khours inside and out, i.e., all the related articles)? With my current example above, it’s not common sense, I think, to trust nothing but randomized controlled experiments. Period!

In hindsight I think I am working on a summary, and maybe that is the reason I feel like dying! ;)

Example: because!

Claim: During [3], try very hard to [4]!

Question: During [3], should we try very hard for [4]?

Example: No idea! Not gonna break my head over this English! But as I read some of the related articles in the further reading, maybe they are talking about… this whole common-sense thing is weird. I am trying to look at my current common-sense thinking, which has been quite upgraded thanks to an STM.

During WWII, there was a lot of tension between Nazi Germany and other countries, and many Germans might have been in favor of killing people of other races. In such cases it would be worthwhile to try very hard not to kill people?

Claim: During [3], try very hard to [4], because [5] might be [6].

Example: because

Claim: During [3], try very hard to [4], because they seem to lead to [7].

Example: because

More generally, we aim to (factor ‘moral uncertainty’ into our views)[1] and to (uphold cooperative norms)[2]. We do this by (taking into account a variety of reasonable ethical perspectives)[3], rather than simply (acting in line with a single point of view)[4].

Claim: We aim to do [1].

Question: Does 80k aim to do [1]?

How am I supposed to come up with examples for these? Am I? It’s so fucking hard. What is the goal here? I am not even sure if there is a point to this! Please tell me your highness

Example:

Split: For moral uncertainty I think of my uncertainty in the value of animal lives, in the value of insects, bees, cockroaches.

How do you decide which theory to follow? Total utilitarianism has the drawback that it will have some really unhappy people.

For moral uncertainty, we think of not knowing whether stealing in a particular case is wrong!

Claim: We aim to do [2].

Claim: We do this by [3] instead of [4].

What is unclear?

  • I don’t have an example for [1]. I don’t know how it looks. As a result I don’t know how 80khours does [1] by [3] instead of [4].

    I spent 2-3 hrs on this and it’s still not good. I will move on for now. In parallel I should read about it and find examples of reasoning amidst moral uncertainty.

Claim: We should factor moral uncertainty by using multiple theories instead of just sticking to one.

Example:

We think that a rights framework captures much of what matters in these considerations. So we formulate the one-sentence version of our views as: promoting long-term welfare while respecting the rights of others.

These are some of the reasons that we think it’s so important to respect the rights of others at the same time as aiming to promote long-term welfare.

Skip! Lite!

Global Priorities

Now that we have a sense of (what ‘making a difference’ means)[1], we can ask (which career options make a difference most effectively)[2].

Claim: We have a sense of [1].

Question: Do we have a sense of [1]?

Example: 80khours promotes working on climate change primarily because of the risks it poses to the lives of future generations. One estimate suggests 300 million people might need to be displaced. To put that in perspective, displacing 6 million people from Syria was already quite hard for the refugees, as many countries were not accepting them.

Claim: Knowing [1], leads to asking [2].

Question: Does knowing [1], lead to asking [2]?

Example: Working as a management consultant, you can earn money but not do much for the 300 million people about to be displaced by 2100, whereas as president of the US you could change policies to bring the temperature increase within controllable limits and promote clean technology.

We think that probably the most important single factor that determines the (expected impact of your work)[1] is the (issue you choose to focus on — whether that’s climate change, education, technological development or something else)[2].

Rant: I guess they are trying to segue into their page that talks about the different issues. I have worked on this statement for a few hours now, but I still don’t get it. Don’t get what? I am not able to wrap my head around why the issue you focus on is the most important factor. Like, why? As an STM often frets to me, “measuring anything but the actual outcome you are interested in is all hogwash!”

Now Having said that, disprove the motherfucker!

Claim: [2] is most important factor, which determines [1].

Question: Is [2] really the most important factor when judged based on [1]?

Example:

Whether you choose climate change, homelessness or AI makes quite a difference to your delivered impact. One can save a few hundred thousand people, another about hundreds of people, and the last potentially millions.

OK! But what about the probability of actually making an impact in said field? Isn’t that an important factor too? If we work on ‘making AI safe’, it is estimated below that the impact is about 700,000 people per additional person. But what are the odds of me being hired by MIRI, given that I don’t have the skills? In my case, I would think 1%. In that case, 1% of 700,000 is 7,000.

In this case at least, even a heavy personal-fit discount doesn’t change the conclusion. So it looks like ‘the issue’ could indeed be a very important factor! Oops!

But none of this actually means anything unless I factor in, say, my capabilities or my probability of succeeding.

Moreover, what I am trying to say is that the expected impact of my work depends on the expected impact of my work only. Just because I work on CC or AI safety doesn’t automatically catapult me. There are several other factors, like personal fit (i.e., my probability of success). Looking at just one factor seems foolish: with poor personal fit you could do so badly that it might not be worth starting!

To conclude, the expected impact of my work depends on several factors, and I don’t know how to say whether ‘the issue’ is the most important one, as one factor without the other seems pointless!
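A minimal sketch of the personal-fit discounting argued here; the per-person figures are the rough assumptions used in these notes, not 80khours’ numbers.

```python
def adjusted_impact(per_person_impact, personal_fit):
    """Discount an issue's per-person impact by the chance of actually succeeding in it."""
    return per_person_impact * personal_fit

# Assumed figures from the surrounding text.
print(adjusted_impact(700_000, 0.01))  # AI safety with a 1% personal fit -> 7,000 lives
print(adjusted_impact(530, 1.0))       # the DS earning-to-give baseline  -> 530 lives
```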

Based on my example, 80khours seems to have been seriously exposed. Now what! Can you comment on this?

To be re-written

It’s harder to have (a big impact on commonly supported causes)[1] because (work in most areas has diminishing marginal returns)[2]. In other words, if (an area already receives plenty of attention)[3], then (there will usually already be people working on the most promising interventions)[4].

Claim: It’s harder to have [1].

Question: Is it harder to have [1]?

Example: If you look at making AI safe, there are 100 people working on it (barely common). If we assume that the current chance of making AI safe is 0.1%, then we are effectively able to save about 7 million people. 80khours claims that if an additional 100 people work on it, the chance rises by another 1%, i.e., roughly 70 million more lives could be saved. That is about 700,000 effective lives saved per person added, on average. Talk about BIG!

If you look at climate change, there is a 10% chance of 7 billion people being affected, i.e., effectively 700 million are in danger. Assuming that there are only 700 people working on CC, which is probably not accurate at all considering the $10 billion in funding, the effective number of people saved is about 1 million per person (a highly conservative estimate).
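A quick back-of-the-envelope check of the two per-person figures above; all inputs are the assumptions stated in this note, not 80khours’ published numbers.

```python
population = 7e9

# AI safety: an extra 100 people raise the chance of success by ~1 percentage point.
ai_extra_people = 100
ai_extra_chance = 0.01
ai_per_person = population * ai_extra_chance / ai_extra_people   # ~700,000

# Climate change: 10% chance of affecting everyone, ~700 people assumed to work on it.
cc_people = 700
cc_chance = 0.10
cc_per_person = population * cc_chance / cc_people               # ~1,000,000

print(f"AI safety: ~{ai_per_person:,.0f} expected lives per additional person")
print(f"Climate change: ~{cc_per_person:,.0f} expected lives per person")
```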

Claim: It’s harder to have [1], because of [2].

because!

Claim: If [3], then [4].

Question: If [3], then __?

Example:

For [3], we think of climate change (CC); it receives $10b in funding.

For [4], we think of the Paris Climate Agreement, whose whole aim is to keep the temperature increase “well below” 2 degrees, signed by 194 “states” covering 88% of greenhouse gas emissions.

Although we’d like to see more people working on many global problems, we think (additional people)[1] can have the most impact by focusing on the (issues that are most neglected)[2] relative to (the magnitude of the stakes)[3] and (the number of promising opportunities to make progress)[4].

Claim: [1] can have the most impact by working on [2], in relation to [3].

Question: Can [1], have the most impact by working on [2], in relation to [3]?

Split:

For [2], we think of making AI safe, with only 100 people working on it. For comparison we also think of institutional decision making, with only 100 to 1k people working on it. Or, for that matter, we look into global priorities research, which also seems to have similar funding of $5m-10m.

For [3], we think of expected lives saved per person.

|                                | AI  | Decision making | Priorities |
|--------------------------------|-----|-----------------|------------|
| Total value of solving         | 1%  | 0.5%*           | 0.5%*      |
| Doubling effort solves         | 1%  | 1%              | 1%         |
| Neglectedness                  | 7   | 8               | 9          |
| Number of additional people    | 100 | 500*            | 500?       |
| Expected people saved/person** | 7k  | 700             | 700        |

* averaged value between the upper and lower limits given by 80khours
? “educated” guess
** expected savings, based on the assumption that the total damage would be 7 billion lives

And without such clarity in numbers, how the fuck do you even begin to compare different interventions! I wish 80khours used the same numbers everywhere and not vague terms like “improves the value of the future by 1%”.

Example:

In order to make the most impact, one would choose AI (7k people saved per person), which seems to follow from “the most neglected issue” and “the magnitude of the stakes”.
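A minimal sketch of how the “expected people saved/person” row in the table above can be reproduced, assuming the total at stake is 7 billion lives (my reading of the truncated footnote):

```python
def expected_saved_per_person(value_fraction, doubling_solves, additional_people,
                              total_at_stake=7e9):
    """Expected lives saved per additional person, under the table's assumptions."""
    value_of_solving = value_fraction * total_at_stake       # lives saved if fully solved
    value_of_doubling = value_of_solving * doubling_solves   # lives saved by doubling effort
    return value_of_doubling / additional_people

print(expected_saved_per_person(0.01, 0.01, 100))    # AI safety       -> 7,000
print(expected_saved_per_person(0.005, 0.01, 500))   # Decision making -> 700
print(expected_saved_per_person(0.005, 0.01, 500))   # Priorities      -> 700
```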

Claim: [1] can have the most impact by working on [2], by focusing on ‘[3] and [4]’.

Question: Can [1], have the most impact by working on [2], focusing on ‘[3] and [4]’?

Split:

For [2], we think of making AI safe, institutional decision making and global priorities research

For [3], we have seen above

For [4], the number of opportunities for me (I assume), which seems to be the same for all issues below:

|                          | AI | Decision Making | Priorities |
|--------------------------|----|-----------------|------------|
| Policy                   | N  | N               | N          |
| Earning to give          | Y  | Y               | Y          |
| Research/PhD             | N  | N               | N          |
| Work at some grant giver | N  | N               | N          |
| Non-academic research    | N  | N               | N          |
| Complementary roles      | M  | -               | N          |

Considering [3] and [4], it appears that [4] does not add value in the above case. The claim appears to be true, but [4] doesn’t seem to add anything.

I am feeling different; I am ready to work on this even late, whereas usually it was start and get it over with. When I start I feel like this is my thing: this is Pandian’s turf! I feel like it’s my basketball, I know the smell, I am ready, I feel like a guy who PUAs every single day! #notafraid

So, what are the most neglected and solvable issues that have the biggest stakes for long-term welfare?

Current View

In the 1950s, the (large-scale production of nuclear weapons)[1] meant that, (for the first time, a few world leaders gained the ability to kill hundreds of millions of people)[2] — and possibly many more if they triggered (a nuclear winter)[2a], (which would make it nearly impossible to grow crops for several years)[3]. Since then, the possibility of (runaway climate change)[4] has joined the list of (catastrophic risks facing humanity)[5].

Claim: [1] meant [2].

Question: Does [1] mean [2]?

Example:

For [1], we think of the sudden rise in the US nuclear arsenal from 2 weapons in 1945 to 299 in 1950.

For [2]: One nuclear weapon can kill hundreds of thousands of people in a “major city”. With 299 weapons they could kill tens of millions of people.

I am unable to find the number of people one nuclear weapon could wound in the 1950s.

Claim: [2a] results in many more deaths

Question: Does [2a] result in many more deaths?

Example:

The smallest nuclear powers today have around 50 Hiroshima-sized nuclear weapons which, if used, could cause temperatures to fall by several degrees over large areas of North America and Eurasia, including most of the grain-growing regions, for more than a year, resulting in famines, starvation and possible deaths.

Source.

Yes, not many numbers are given, but I am unable to go deeper, as this already took an hour!

As far as I can tell from my reading, there have been no nuclear winters in the past.


Claim: Since 1950 [4] has joined [5].

Question: Since when did [4], join [5]?

Example: By the late 1960s, the number of scientific papers published on climate change had leapt from roughly 3 to 20 papers per year. — source

During the next century we may develop (new transformative technologies, such as advanced artificial intelligence and bioengineering)[1], that could bring about a (radically better future)[2] — but may also (pose grave risks)[3].

Claim: We may develop [1].

Question: Are we going to develop [1]?

Example: An AI system named AlphaGo became, in the course of a year, the best player in the world, with a 60-win streak. Why this is impressive (transformative) is that it is not possible to win the game with brute force, only with strategic intuition, which the AI developed. — Source

Claim: [1] can cause [2].

Question: Can [1], cause [2]? or How many people can AI save?

Example: Assuming everything can be cured, and that it is just a matter of time and understanding, AI could cure cancer, which killed 9.6 million people in 2017.

Claim: [1] can cause [3].

Question: Can [1], cause [3]?

Example: Given an autonomous assignment to reduce the incidence of cancer, an AI decides to kill human beings before they can grow old enough to develop cancer, as this is the only way to drive cancer rates to 0.

Previously, we focused on improving (near-term global health)[1], and we still think it’s (an important cause)[2]. However, over the past eight years, we’ve come to realise that (these technological developments)[5] mean that the (actions of the present generation)[3] may put the (entire future of civilization)[4] at stake.

Claim: In the past 80khours focussed on [1].

Claim: [1], is still [2].

Question: Is [1], important?

Example: Every year around 10 million people in poorer countries die of illnesses that can be prevented or cured very cheaply ($100 to $1,000 per year), including malaria and HIV.

Claim: [5] implies, [3] may result in [4].

Question:

Example: Technologies such as AI could produce grave problems if not enough effort is put into them (for example, into the control problem). In the future, if AI is much more powerful and intelligent than a human brain, it could choose to simply eliminate rival intellects, i.e., an existential risk.

In combination with our (growing confidence in longtermism)[1], this has persuaded us that the most important challenge of the next century is likely to be to (reduce ‘existential risk’ — events that would drastically damage the long-term future of humanity.)[2]

Claim: [2] is the most important challenge.

Question: Is [2] the most important challenge?

Example: AI could potentially wipe out the entire population and all future generations if “enough work” is not done on the control problem.


There are several types of (existential risk)[1]. Currently, we’re most concerned by the risk of (global catastrophes)[2] that might lead to (billions of deaths)[3] and (threaten to permanently end civilization)[4]. There are several reasons we think it’s overwhelmingly (important to address these risks)[5].

Claim: There are several types of [1].

Question: What are the several types of [1]?

Example: I don’t know what they are; I can’t find anything related online either.

Claim: 80khours is currently, concerned by [2] that might lead to [3] and [4].

Question: Is 80khours concerned by [2], which might lead to [3] and [4]?

Example: 80khours scores climate change at 14/16, compared to “improving health in poor countries” at 13/16.

Claim: [2] might lead to [3] and [4].

Claim: There are several reasons for [5].

Question: Are there several reasons for [5]?

Example: These are written below…

First, because of the (power of the new technologies)[1] noted above, we think that the (probability of this kind of catastrophe occurring in our lifetime)[2] is too big to ignore.

Claim: [2] is too big to ignore because of [1].

because

Claim: [2] is too big to ignore.

Question: Is [2], too big to ignore?

Example:

For [2], we think of a 5% chance of a superintelligent AI causing human extinction, i.e., roughly 7 billion lives multiplied by 5%, an expected loss of 350m people (not counting future generations).

Second, it seems like (such an event)[3] would be among the (worst things that could happen)[4]. This is especially true if one takes a (longtermist perspective)[5], because (extinction)[6] would also mean the (loss of the potential welfare of all future generations)[7].

Claim: [3] would be [4].

Question: Would [3] be [4]?

Example: As seen in the previous example, an expected loss of 350m people (not taking future generations into account).

Claim: It is especially true if you take into account [5].

Example: With a 5% chance and an 11 billion population, we already have an expected loss of 550m people. If we consider future generations, this number only goes up, based on the expected population and the time they have on earth. For example, if we assume a population of 20b by 2200 without the catastrophe, the expected loss rises to around 1b people.
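A tiny expected-value sketch of the figures above, using the 5% probability assumed in the text:

```python
p_catastrophe = 0.05  # assumed chance of an extinction-level event

for label, population in [("current generation (~7b)", 7e9),
                          ("world of 11b", 11e9),
                          ("world of 20b by 2200", 20e9)]:
    print(f"{label}: expected loss ~ {p_catastrophe * population:,.0f} people")
```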


Third, (some of these risks)[1] are highly neglected. For instance, the fields of (AI safety and catastrophic biorisk)[0] receive the attention of perhaps only 100 dedicated researchers and policymakers, compared to the (billions or trillions of dollars)[2] that (go into more familiar priorities)[3], such as (international development)[4], (poverty relief in rich countries)[5], (education, and technological development)[6]. This makes them perhaps more than (a factor of 1000 more neglected)[7].

Claim: [1] are highly neglected.

Example: AI safety receives the attention of 100 dedicated researchers, for a probability of extinction of 5% by 2100.

Claim: [2] is spent on [3].

Example: The US spent $600 billion on the military in 2015.

Claim: [2] is spent on [4]

Example: The US spends $5.7 billion in foreign aid for Afghanistan.

Claim: [2] is spent on [5]

Example: 45m people living in the US are in poverty and receiving aid. The US government spends $18k per person, which is $810 billion in total.

Claim: [2] is spent on [6].

Example: “Total expenditures for public elementary and secondary schools in the United States in 2014–15 amounted to $668 billion”. — Source

Claim: This makes [1] a factor of 1000 more neglected.

Question: How neglected does this make [1]?

Example: $10m is spent on AI safety risk whereas $668b is spent on education: 668b / 10m ≈ 67,000.

This neglect suggests that a (comparatively small number of additional people working on these risks)[1] could significantly reduce them. We suggest specific ways to help in the next section.

Example:

If we double the number of people working on AI safety, we can reduce the risk by 1%, which amounts to about 70 million effective lives saved for an extra 100 people.

This said, we remain (uncertain about this picture)[1]. (Many of the ‘crucial considerations’ that led us to our current priorities)[2] were only (recently identified and written about)[3]. We may yet learn of (other ways to increase the probability of a positive long-term future and reduce the chance of widespread future suffering)[4], that seem (more promising to address than the existential risks we currently focus on)[6].

Claim: 80khours is [1].

Question: Is 80khours [1]?

Example: 80khours emphasizes working on “global priorities research” to identify which we should work on more: AI safety or climate change (for example).

Claim: [2] were only [3].

Question: Was [2], [3]?

Example: 80khours wrote this article on AI safety in April 2015.

Claim: there may be [4] than [6].

Example: It could turn out that focusing heavily on green energy is the way to go for the future, rather than identifying or predicting the temperature rise, which will always carry a ton of uncertainty.

I can only give a hypothetical example here, right?

For these reasons, we also work to support those creating the (new academic field of global priorities research)[1], which (draws on economics, philosophy and other disciplines)[2] to work out (what’s most crucial for the long-term future)[3].

Claim: 80khours supports GPR.

Example: 80khours publishes its own research on which issues are most pressing.

|                    | AI Safety | Nuclear security |
|--------------------|-----------|------------------|
| Scale (16)         | 15        | 15               |
| Neglectedness (12) | 8         | 3                |
| Solvability (8)    | 4         | 3                |

Claim: GPR “draws” on economics, philosophy and other disciplines to determine [3].

Split:

For GPR, we think of 80khours suggesting that new people work on AI safety instead of climate change, which will result in saving many more lives over the long-term future.

For economics, we think of 80khours’ intention to look at factors like scale, neglectedness and solvability.

| Factors            | AI Safety | Climate Change |
|--------------------|-----------|----------------|
| Scale (16)         | 15        | 14             |
| Neglectedness (12) | 8         | 2              |
| Solvability (8)    | 4         | 4              |

For philosophy, we think of 80khours and their focus on the long-term future instead of the near-term future, as the former will save a lot more people.

For other disciplines, we think of 80khours looking into climate change and AI to understand the values of the different factors.

Example: 80khours suggests that working on AI safety is better than working on climate change, based on the fact that its neglectedness is much higher and the number of people who would die as a result is large.


200 starts here!

Work starts here. Finish 40-50 phrases today

In addition, we encourage people to (work on ‘capacity-building’ measures)[1] that will (help humanity manage future challenges, whatever those turn out to be)[2]. These measures could involve (improving institutional decision making and building the ‘effective altruism’ community.)[3]

Claim: It is good for people to [1] which will [2].

Question: Is it good for people to [1] which will [2]?

Split:

For [1], we think of Niel Bowerman, who has worked at different organizations (e.g., CEA) in fundraising and organization-growing roles.

For [2], we think of Niel Bowerman being able to jump right in and work on addressing the talent gap for AI safety.

For good, we think of working on AI safety, because it has the potential to take down the entire world and yet has only 100 people working on it.


Claim: [1] could involve [3].

Question: Does [1] involve [3]?

Split:

For [1], we think of Niel Bowerman working for 80khours in the role of bringing more people into AI safety.

For building the EA community, we think of the same example as in [1].

For institutional decision making, I don’t have an example

Some other issues we’ve focused on in the past include (ending factory farming)[1] and improving (health in poor countries)[2]. They seem especially promising if you don’t think (people can or should focus on the long-term effects of their actions)[3].

Claim: 80khours focused on [1] and [2] in the past.

Question: Has 80khours focused on [1] and [2] in the past?

Example: 80khours has written articles supporting work on factory farming and global poverty since 2009. But recently they name AI safety and other existential risks as the top problems, not FF and GP.

Claim: [1] seem promising if you don’t think [3].

Question: Is [1], promising if you don’t think [3]?

Example:

50 billion animals die each year, and 1k people are working on this. The “expected value with intense efforts for the future of humanity” is 0.05% (average), i.e., 0.0005 * 7 billion human lives = 3.5m expected human lives. Assuming that doubling the effort reduces the problem by 1%, we have:

3.5e6 expected lives * 1% / 1,000 people
= 35 expected lives per additional person

Contrast this with working in data science at Google in the US, where I expect to save about 400 lives.

So this does not look promising!

Claim: [2] seems promising if you don’t think of [3].

Question: Is [2] promising if you don’t think [3]?

Example:

If one works at GiveWell, one can probably have an impact of $97k per year. This implies one can save $97k / $4k per life * 30 years ≈ 727 lives in total over 30 years. Contrast this with working in data science at Google in the US, where about 400 lives can be saved over 30 years.
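A tiny check of the arithmetic above, assuming the roughly $4,000-per-life figure used elsewhere in these notes:

```python
impact_per_year = 97_000   # $ of impact per year at GiveWell (figure from the text)
cost_per_life = 4_000      # assumed $ per life saved
years = 30

print(impact_per_year / cost_per_life * years)  # ~727 lives over a 30-year career
```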

There are (many issues)[1] we haven’t been able to look into yet, so we expect there are other (high-impact areas we haven’t listed)[3]. We have a (list of candidates)[4] on our (problem profile page)[5], and we’d be excited for (people to explore some of these as well as other areas that could have a large effect on the long-term future.)[6] (These areas)[6a] can be (particularly worth pursuing)[7] if you’re (especially motivated by one of them)[8]. We cover this more in the section on ‘personal fit’ below.

Claim: There are [1], that 80khours has not looked into yet.

Question: Are there [1], that 80khours has not looked into yet?

Example: Criminal Justice Reform, medical research into how to slow aging etc…

Claim: There could be other [3].

Question: Could there be other [3]?

In this case, I could give a hypothetical example or an example from the past. Can you help with what’s good here, and why?

Example from the past: Until a few years back, 80khours thought the best area to work on was reducing near-term risks to life (i.e., reducing global health risks). But once they explored the global catastrophic risks that could kill the entire planet and future generations, they changed their stance on where people should be working, considering the impact.

Example (hypothetical): If medical research into ‘how to slow aging’ seems largely promising (a 95% chance of success with $10b and 100 extra people) at delivering a mechanism that doubles human life expectancy, it could be beneficial to work on it, as it could save 95% * 7b expected lives / 100 = 66m expected lives per person working on it.

Claim: 80khours has [4] on [5].

Example: They have “individual cognition” and many others on this page: https://80000hours.org/problem-profiles/

Claim: It’s a good idea for [6].

Question: Is it a good idea for [6]?

Example:

Working in DS gives an impact over a 30-year career, assuming:

  • a 75% chance of working in the US, starting at $150k at age 35, for 30 years
  • average salary growth of 5% until 50 and then 2% until 65
  • a 10% increase every 5 years
  • donating 35% of salary

This results in saving about 530 people. Previously I said 400; now I have an updated calculation.
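A rough sketch of this earning-to-give model using the assumptions listed above, with an assumed cost of about $4,000 per life saved (the AMF-style figure used elsewhere in these notes). With these exact inputs it comes out somewhat above 530; the result is very sensitive to the cost-per-life and tax assumptions.

```python
def lives_saved(start_salary=150_000, years=30, start_age=35,
                growth_young=0.05, growth_old=0.02, switch_age=50,
                bump_every=5, bump=0.10, donate_frac=0.35,
                p_career=0.75, cost_per_life=4_000):
    """Back-of-the-envelope earning-to-give model from the assumptions above."""
    salary, donated = start_salary, 0.0
    for year in range(years):
        age = start_age + year
        donated += salary * donate_frac
        salary *= 1 + (growth_young if age < switch_age else growth_old)
        if (year + 1) % bump_every == 0:   # a 10% bump every 5 years
            salary *= 1 + bump
    return p_career * donated / cost_per_life

print(round(lives_saved()))
```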

If instead I get into “promoting effective altruism”, work on my people-convincing skills, and convert only 10 people who would not otherwise have donated to donate similar amounts as in a DS career, it appears that it could result in saving 5,300 people. Of course, this needs to be multiplied by the probability of it actually happening, which could be as low as 10% and still match a DS career’s 530 lives.

Claim: These areas can be [7], if you’re [8].

Question: Is [6a] [7], if you are [8]?

Split:

For [6a], we think of working on promoting EA, as in the above example.

For [8], we think of a personal fit of more than 50%

For [7], we think of an impact of 5,300 * 50% = 2,650 lives, which is better than a DS job’s 530 people.


Which careers effectively contribute to solving these problems

The (most effective careers)[1] are those that address the (most pressing bottlenecks to progress)[2] on (the most pressing global problems)[3].

Claim: [1] are those that address [2] on [3].

Question: Is [1], [2] on [3]?

Split:

For [1], we think of a career in AI safety, say as a computer science researcher at MIRI, with an impact of 57k people saved per additional person (derived below). Contrast this with the 530 people saved over a career in DS.

It finally seems to make sense why an STM thought donating to MIRI was better than donating to GiveWell.

For [2], we think of the control problem in AI

For [3], we think of AI safety

Derivation for 57k:

|                                        | AI safety | Climate Change |
|----------------------------------------|-----------|----------------|
| Possible deaths at the end of 2100     | 21b       | 20% x 21b      |
| % of chance (middle of given range)    | 5.5%      | 5.25%          |
| People involved                        | 100       | 1000 (guess)   |
| Double effort => X% reduction in risk  | 1%        | 50%*           |
| Multiply everything above              | 57,750    | 55,125         |
| Money involved (minimum) $             | 10m       | 10b            |
| Dividing by above                      | 5.7E-3    | 5.5E-6         |

* Here “double effort” is assumed to mean the “major effort” cited in their article.
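A minimal sketch reproducing the table above, under the reading that the product of deaths, probability and risk reduction is divided by the doubled workforce (2 x the current people); all inputs are the table’s own assumptions.

```python
def lives_per_extra_person(deaths_at_stake, p_catastrophe, current_people, risk_reduction):
    """Expected lives saved per additional person when doubling the workforce
    cuts the risk by `risk_reduction`."""
    return deaths_at_stake * p_catastrophe * risk_reduction / (2 * current_people)

ai = lives_per_extra_person(21e9, 0.055, 100, 0.01)            # ~57,750
cc = lives_per_extra_person(21e9 * 0.20, 0.0525, 1_000, 0.50)  # ~55,125
print(ai, cc)
print(ai / 10e6, cc / 10e9)  # per dollar of the minimum funding: ~5.7E-3 vs ~5.5E-6
```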

For the (same reasons)[1] we think it’s an advantage to work on (neglected problems)[2], we also think it’s an advantage to take (neglected approaches to those problems)[3]. We discuss some of these approaches in this section.

Claim: Due to [1], it is good to work on [2].

because

Claim: It is advantageous to work on [2].

Question: Is it advantageous to work on [2]?

Example:

As shown above, the lives saved per person per dollar of funding is much better for AI safety, roughly a factor of 1,000 better than working on climate change, which is not “so neglected” (i.e., it has $10b in funding).

Claim: It is good to work on [3].

Question: Why is it advantageous to work on [3]?

Example:

MIRI sent out an email at Christmas saying that they missed their funding goals by a few hundred thousand dollars. Never mind adding another 100 people to reduce the most important problem (complete annihilation by 2100, with a 1-10% chance) by 1 more percent; what about even keeping the people currently involved and trying to grow the movement?

For [3], we think of ‘adding more people’ being the most neglected approach as there are only 100 people working on it currently and adding another 100 will only reduce the problem by 1%.

For advantageous, we think of 57k lives (as above) for every additional person added to AI safety (on average).

Last two hours


Given our take on (the world’s most pressing problems)[1] and the (most pressing bottlenecks these issues face)[2], we think the following (five broad categories of career)[3] are a good place to (start generating ideas)[4] if (you have the flexibility to consider a new career path)[5].

Claim: Given [1] and [2], it appears that following [3] is a good place for [4].

I am not sure how to give an example Given [1] and [2]. So I skip this for now.

Claim: It appears that following [3], is a good place for [4].

Question: Is following [3], a good place for [4], if [5]?

Split:

For [3], we think of a career in researching Climate Change

For [4], we think of Niel Bowerman meeting ‘Giving What we can’ which led him to go into Earning to Give in Finance, and then slowly transitioning from there to FHI and then into AI policy with 80khours.

For [5], we think of Niel being able to move to finance for earning to give.

Example: Niel Bowerman started his career researching climate change, realized that he should probably earn to give, moved into a career path in that direction, and in the end landed in AI policy at 80000 Hours. As we have seen earlier, AI safety » Climate Change, aka "good".

research

(Many of the top problem areas we focus on)[1] are mainly (constrained by a need for additional research)[2], and we’ve argued that (research)[3] seems like (a high-impact path in general)[4].

Claim: [1] are mainly [2].

Question: Is [1], mainly [2]?

Example: There are 100 people working on AI safety, an additional 100 people will reduce the risk by 1%.

Claim: [3] seems like [4].

Example: Working in MIRI as a researcher could save 57k lives and has a bang-for-the-buck as compared to Climate change (about 1000 times better).

I don’t know what general means so I skip it!

(Following this path)[8] usually means (pursuing graduate study in a relevant area where you have good personal fit)[5], then aiming to do (research relevant to a top problem area)[6], or else (supporting other researchers who are doing this)[7].

Claim: [8] usually means, [5].

Example: >50% of them at MIRI seem to have a graduate degree or a PhD.

Claim: [8] usually means [6] or [7].

Example: For me top problem area is Climate change or AI safety. None of the team of MIRI seem to have worked or done any research in Climate change or AI safety before joining MIRI, or even supporting them in some way.

I hereby confirm [8] doesn’t seem to mean [6] or [7].


(Research)[1] is the (most difficult to enter of the five categories)[2], but it has (big potential upsides)[3], and in (some disciplines)[4], going to (graduate school)[5] gives you (useful career capital for the other four categories)[6]. This is one reason why if (you might be a good fit for a research career)[7], it’s often a good path to start with though we still usually (recommend exploring other options for 1-2 years before starting a PhD)[8] unless (you’re highly confident you want to spend your career doing research in a particular area)[9]).

Claim: [1] is [2].

Example: I am positive MIRI does not want me with my current skill set. I probably need to work at least 5 years (magic number) before I come up to the level of their research. Whereas I could already earn-to-give to MIRI, as small as the amount may be.

Claim: [1] has [3].

Example: An additional worker in places like MIRI has an impact of 57k people. This is by far the highest I have ever seen in terms of impact. If you look at earning to give for the most money making job I know, aka Investment Banking, you could save 3771 lives at max (not including the personal fit).

Claim: In [4], going to [5], gives you [6].

Example: Jesse Liptrap from MIRI, finished his PhD in Math and was able to work as SWE in Google (allowing him to earn-to-give). He currently works at MIRI.

Claim: If [7], it might be good to start directly with [1].

Split:

For [7], we think of Jesse Liptrap having at least 3 papers to his name.

For 'it might be good to start with [1]', we think of Jesse having finished his PhD, being able to work at Google (with the possibility of earning to give), and in the end still being able to come back to research.

Claim: It is better to do [8] unless [9].

I guess the point of 80k is: To explore and try other things before joining PhD, as once you finish your PhD and leave academia to explore, coming back is hard. I was unable to find real life examples of “how hard it is” or who these people were.

After your (PhD)[1], it’s hard to (re-enter academia if you leave)[2], so at this stage if (you’re still in doubt)[3] it’s often best to (continue within academia)[4] (although this is less true in (certain disciplines, like machine learning, where much of the most cutting-edge research is done in industry)[5]). Eventually, however, it may well be best to do (research in non-profits, corporations, governments and think tanks instead of academia)[6], since (this can sometimes let you focus more on the most practically relevant issues and might suit you better)[7].

Claim: After [1], its hard to [2]

Was not able to find an example online for someone who came back to academia and how “hard” it was for him

Skipped the whole para, it is taking a lot of time to find examples (an hour or more)

Claim: if [3], better to [4]

Claim: if [3], better to [4], unless [5].

Claim: It is better to work in [6], since [7]

Claim: it is better to work in [6].

You can also (support the work of other researchers)[1] in a (complementary role, such as a project manager, executive assistant, fundraiser or operations)[2]. We’ve argued (these roles)[3] are often neglected, and therefore especially high-impact. It’s often useful to have (graduate training in the relevant area)[4] before taking these roles.

Claim: It is good to [1] in [2].

Example: As discussed earlier, AI safety is really quite neglected with 100 people working on it with 10m $. Neil Bowerman from 80khours is trying to add people required to fill the “talent gaps”. If Neil is able to add 10 more people and even claim 1% of their total impact that would be 570 lives saved just for his work in a few years. Contrast that to a DS job which saves 400 people

I think it is important to contrast it with something otherwise it is hard for someone to understand if it is good or bad. Agree: to always contrast?

Claim: [3] is often neglected

Example: As of 2017 only 100 people are working on it. Adding another hundred people would reduce the risk by only 1%. The risk in question is a 5% chance of world extinction by 2100.

Claim: [3] is high impact

Example: As shown above 1 extra person in the field of AI can on average save 57k people. If Neil is able to add 10 more people and even claim 1% of their total impact that would be 570 lives saved just for his work in a few years.

Claim: [3] is neglected and hence it is high impact.

Example: AI is neglected whereas Climate Change is not. A person working in AI seems to have 1000 times more impact than a person working for Climate Change.

| | AI safety | Climate Change |
|---|---|---|
| Possible Deaths at the end of 2100 | 21b | 20% x 21b |
| % of chance (middle of given range) | 5.5% | 5.25% |
| People involved | 100 | 1000 (guess) |
| Double effort => X% reduction in risk | 1% | 50%* |
| Multiply everything above | 57,750 | 55,125 |
| Money involved (minimum) $ | 10m | 10b |
| People saved per $ per person (dividing by above) | 5.7E-3 | 5.5E-6 |

Claim: It is useful to have [4] before [3].

Split:

For [4] before [3]: Neil Bowerman has a PhD (equivalent) in Physics, where he worked on existential risks of extreme climate change with a focus on providing emission targets.

Also Sean O hEigeartaigh, from CSER, has a PhD in Genome Evolution; he is also known to have increased the number of people at FHI and secured roughly 3m $ in funding. Now he is completely in operations such as grant writing, fundraising, long-term planning etc.

Not sure how “useful” [4] is before [3]

(Some especially relevant areas to study)[1] include (not in order and not an exhaustive list): (machine learning, neuroscience, statistics, economics / international relations / security studies / political science / public policy, synthetic biology / bioengineering / genetic engineering, China studies, and decision psychology)[2]. (See more on the question of what to study.)

Claim: [1] is [2].

Not sure how to satisfy the claim’s “relevance” with an example. I can imagine how it looks though: A did Machine Learning PhD and it helped because of X in top problem. I am unable to connect B and the top problem with an example. aka the same inability to answer the previous claim’s “usefulness”


Working at effective non-profits

Although we suspect (many non-profits)[1] don’t have (much impact)[2], there are still (many great non-profits)[3] addressing (pressing global issues)[4], and they’re sometimes constrained by a (lack of talent)[5], which can make them a (high-impact option)[6].

Claim: [1] don’t have [2].

Example: Many non-profits like Grameen Foundation fail to show data of their success and in some cases such as the ‘Village phone program’ seem to have been evaluated as having no impact on the trading activity which it was supposed to boost.—GiveWell

Claim: [3] addresses [4].

Example: MIRI addresses research regarding AI safety

Claim: [3] constrained by [5].

Example:

For [3], we think of MIRI.

For [5], we think of the Open Philanthropy Project being ready to pay a mean value of 3m $ to add a person immediately to places like MIRI and OpenAI, when the salary for a MIRI engineer would be 200k $ max, I assume.

Claim: [3] constrained by [5], is [6].

Example: Every additional person added to AI safety(MIRI, OpenAI) will have on average an impact of 57k lives.

One major advantage of (non-profits)[1] is that (they can tackle)[1a] the (issues that get most neglected by other actors)[2], such as (addressing market failures)[3], (carrying out research that doesn’t earn academic prestige)[4], or doing (political advocacy on behalf of disempowered groups such as animals or future generations)[5].

Claim: [1] can tackle [2] such as [4].

Split:

For [1], we think of GiveWell

For [2], we think of not knowing where to donate our money as we have no idea of the effectiveness of the charity.

For [4], we think of a post by GiveWell, where they tear down some of the popular non-profits like Grameen and expose how much they suck.

For [1a], we think of GiveWell being able to move 110m $ in 2015 to organizations it deemed effective.

Claim: [1] can tackle [2], such as [3].

Split:

For [1], we think of 80khours

For [2], we think of AI Safety with only 100 people working on it for a 5% chance of human extinction by the end of this century.

For [3], we think of 80khours addressing lack of people in AI safety with Neil Bowerman.

For [1a], we think of 80khours deploying Neil Bowerman to identify and fill up the talent gaps and create talent pipelines to ensure there are more people working on AI safety.

Claim: [1] can tackle [2], such as [5].

Split:

For [1], we think of Animal Equality

For [2], we think of 50 billion animals being killed every year, "most of them" experiencing "extreme levels of suffering" (like castration without anesthesia or antibiotics)—Source

For [5], we think of Animal Equality advocating for animal rights in the US, India etc.

For [1a], we think of Animal Equality saving 3k to 8k animals for every 1k $ of donations.


(To focus on this category)[0], start by making a list of (non-profits)[1] that address (the top problem areas)[2], (have a large scale solution to that problem)[3], and (are well run)[4]. Then, (consider any job where you might have great personal fit)[5].

Claim: Make a list of [1] with [2], [3] and [4]; and then do [5] for [0].

For list of [1], with [2], [3], [4], we think of

  • MIRI working on AI safety, is working on solving the control problem with research, and has enough funding for this year for their 15 staff members

  • 80khours works on Global Priorities Research, they provide research for all people to read to help them make “good choices” in their career, and has enough funding for this year.

For 5, we think of:

  • Let’s say I have a personal fit of 1% for MIRI and 1% for 80khours.

For 0, we think of maximum value of [personal fit multiplied by impact]:

  • Every additional person to MIRI has an impact of 57k people. With a 1% personal fit, I would be at 570 people saved.

  • By working in 80khours similar to the position of Neil Bowerman, if I add 50 people to AI safety and assume a 1% impact from them, and a personal fit of 1%, we have 57000*50*1%*1%=285 people

Just looking at personal fit seems not to be enough; we should also look at impact multiplied by it.
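A tiny sketch of this "personal fit × impact" comparison, using the rough numbers from the bullets above. The 1% fits and the 1% claimed impact are the guesses made there, not real estimates.

```python
# Rough sketch of "pick the option with the largest personal fit x impact".
IMPACT_PER_AI_SAFETY_PERSON = 57_000  # lives, from the earlier derivation

options = {
    # joining MIRI directly: full researcher impact, 1% personal fit
    "MIRI researcher": IMPACT_PER_AI_SAFETY_PERSON * 0.01,
    # an 80khours-style role: add 50 people, claim 1% of their impact, 1% fit
    "80khours talent-gap role": IMPACT_PER_AI_SAFETY_PERSON * 50 * 0.01 * 0.01,
}

for name, lives in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~{lives:.0f} lives")   # MIRI ~570, 80khours ~285
```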

The (top non-profits in an area)[5] are often (very difficult to enter)[6], but you can always (expand your search to consider a wider range of organizations)[7]. (These roles)[8] also cover a (wide variety of skills, including outreach, management, operations, research, and others.)[9]

Claim: [5] is often [6].

Example: If you look at the people working in MIRI, a research fellow is expected to have published research in computer science, logic and mathematics. This is extremely hard for me due to my lack of background, and it sounds like 5 years of full time work before I reach that level.

Claim: [7] is a solution to [5] being [6].

Example: It looks like 80khours means to look at non-top non-profits such as those working on ‘health in poor countries’ or ‘animal rights’. I would imagine this should take less than 5 years of part time work, to get into GiveWell.

Claim: [8] covers [9].

Example: Working at GiveWell would mean doing research on effectiveness of interventions and writing blogs.

We list some (organizations to consider)[10] on (our job board)[11], which includes (some top picks)[12] as well as (an expanded list at the bottom)[13]. Read more about working at effective non-profits in our full career review (which is unfortunately somewhat out of date).

Claim: [10] is listed in [11].

Example: MIRI is on the job board.

Claim: [11] include [12] as well as [13].

Example: Job board includes MIRI, as well as GiveDirectly.

Apply an unusual strength to a needed niche

If you already have a strong existing skill set, is there a way to apply that to one of the key problems?

If (there’s any option)[13] in which you (might excel)[14], it’s usually worth considering, both for the (potential impact)[15] and especially for the (career capital)[16]; (excellence in one field)[17] can often give you (opportunities in others)[18].

Claim: if [13] in which you [14], it is worth considering for [15].

Example:

If Messi (soccer player worth 400m $) works in ML and say somehow joins MIRI, he can save 57k people. If Messi instead donates 5m $ and covers MIRI's budget, he is essentially sponsoring 15 people who each have on average a 57k-people impact. If Messi claims 20% of the total impact of MIRI, this comes to about 15*57000*0.2 = 171k people.

Claim:

This is even more likely if you’re (part of a community that’s coordinating or working in a small field)[19]. (Communities)[20] tend to need a (small number of experts)[21] covering each of their (main bases)[22].

I gave up at this point! too painful, barely going forward! Quantifying impacts and giving examples is quite slow and really hard (1 claim per 45mins). So I stop here.


AI


There is no doubting the (force of the arguments)[1]: the problem is a (research challenge worthy of the next generation’s best mathematical talent)[2]. (Human civilization)[3] is at stake.

Claim: There is no doubting [1].

Question: Why is there no doubting [1]?

Split: For [1], we think of, 5% chance for human extinction due to AI by 2100.

Example: The fate of Gorillas currently depends on the actions of humans. Similarly the fate of humanity may come to depend more on the actions of machines than our own.

This is reasoning and not an example, I think; your thoughts? Or should I just give a hypothetical example?

Imagine Russia has an autonomous weapon system that works without human intervention. If the weapon detects a threat, it is going to engage and bomb the hell out of whoever it thinks did this. If the AI makes a mistake at any time, it still continues to bomb the hell out of whoever it thinks did it, resulting in war.

Claim: Problem is [2].

Example: MIRI was founded in 2000. And in 2017 80khours says that adding another 100 people will only solve 1% of the problem.

Claim: [3] is at stake.

Example:

The fate of Gorillas currently depends on the actions of humans. They are currently endangered. Similarly the fate of humanity may come to depend on the actions of machines than our own.

Around 1800, (civilization)[4] underwent (one of the most profound shifts in human history: the industrial revolution)[5].

Claim: Around 1800, [4], underwent [5].

Example: Around 1800, inventions such as the steam engine transformed transportation: travel by horse or boat gave way to railroads, steamboats and automobiles.

(This)[6] wasn’t the (first such event)[7] – (the agricultural revolution)[] had upended (human lives 12,000 years earlier)[].

Claim: [6] wasn’t [7].

Example: The agricultural revolution 12,000 years earlier allowed humans to produce enough food for themselves. This shows up only in the 1700s, with the population rise from 5.5m to 9 million in Britain. It does not show up earlier due to diseases and warfare, apparently.

(A growing number of experts)[8] believe that (a third revolution will occur during the 21st century, through the invention of machines with intelligence which far surpasses our own)[9]. These range from (Stephen Hawking to Stuart Russell, the author of the best-selling AI textbook, AI: A Modern Approach)[10].

Claim: [8] believe [9].

Example: Stephen Hawking says here that “full development of an AI” will spell the end of the world.

I guess this is not an example!

Claim: [10] are part of [8].

Example: An Open letter was signed by Stephen hawking, Stuart Russel and many others in 2015 stating concerns over the issues with AI.

(Rapid progress in machine learning)[1] has (raised the prospect that algorithms will one day be able to do most or all of the mental tasks currently performed by humans)[2]. (This)[3] could ultimately lead to (machines that are much better at these tasks than humans)[4].

Claim: [1] has [2].

Example: In 2000, the "Roomba" could autonomously vacuum the floor by avoiding obstacles. Today, the AlphaGo AI can beat the greatest Go players with just a year of learning.

Claim: [3]/[1] could lead to [4].

Example: Today, AlphaGo AI, can beat the greatest Go players with just a year of learning.

(These advances)[5] could lead to (extremely positive developments, presenting solutions to now-intractable global problems)[6], but they also pose (severe risks)[7]. (Humanity’s superior intelligence)[8] is pretty much the sole reason that (it is the dominant species on the planet)[9]. If (machines surpass humans in intelligence)[10], then just as the fate of gorillas currently depends on the actions of humans, the (fate of humanity may come to depend more on the actions of machines than our own)[1].

Claim: [5]/[3]/[1] could lead to [6].

Example:

AlphaGo, identified superior ways of playing GO which were previously considered rubbish by humans for thousands of years. Computers seem like they can go beyond what humans can see with years and years of work, within just a year. Similarly, it could be possible to cure cancer and other diseases.

How do I give an example for "could lead to"? I don't think I have given one above!

Claim: [5] also poses [7].

Example: With the making of autonomous weapons or autonomous combat bots, a cyber attack by an adversary or a malfunction could result in attacks on people or escalate conflicts by killing the unintended.

Claim: [8] is [9].

Example: Humans are capable of making tools like spears to protect themselves from large predators, while always traveling as a group. Whereas zebras, even though they travel in large packs, have no way of resisting a few lions targeting 100 zebras. There will be casualties.

Claim: if [10] then [1].

I have no idea how to answer this claim, how do I give an example that will inform if A then B.


For a (technical explanation of the risks from the perspective of computer scientists)[1a], see these papers (concrete problems in AI, long-term challenges ensuring the safety of AI)[2].

Claim: [1a] is found in [2].

Example:

For [1a] in [2] we think of, “Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent’s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward.”—Source

(This)[3] might be the (most important transition of the next century)[4] – either ushering in an (unprecedented era of wealth and progress)[5], or (heralding disaster)[6]. But it’s also (an area that’s highly neglected)[7]: while (billions)[8] are (spent making AI more powerful)[8a], we estimate (fewer than 100 people)[9] in the world are working on (how to make AI safe)[10].

Claim: [3] might be [4] either going into [5] or [6].

Split:

For [3], we think of a world where machines are more intelligent than human beings.

Example:

There is a chance that we could go into extinction (for example, as a result of autonomous warbots being compromised leading into war) and if not it could be curing diseases/problems like Cancer or global warming. The outcomes are on both extremes.

Claim: AI safety is [7].

Example: Only 100 people are working with 10m $ in funding. Contrast this to the funding obtained by one single organization working on curing Malaria: AMF for this year got 40m $.

Claim: [8] is spent on [8a]

Example: It appears that billions of dollars are going to be spent on making virtual assistants, chatbots, recognizing images, processing human speech, identifying anomalies in CT scans, identifying cracks in jet engine blades etc… NOT ON AI SAFETY.—source

Claim: [9] working on [10].

Example: There seem to be 12 organizations working on the problem of AI safety. All seem to be non-profits of a small scale so I would imagine 15 people max per organization roughly amounts to 180 people (approximately in the ballpark).

(This problem)[1] is an (unusual one)[2], and it took us a (long time)[3] to (really understand it)[4]. Does it (sound weird)[5]? Definitely. When (we first encountered these ideas in 2009)[5a] we (were skeptical)[6]. But (like many others)[7], (the more we read the more concerned we became)[8]. We’ve also come to believe the (technical challenge)[9] can probably be (overcome if humanity puts in the effort)[10].

Claim: [1] is [2].

Example: I have never heard about this in the news/media, unlike Climate Change. I didn't even look twice despite an STM donating 4k $.

Claim: It took 80khours [3] to [4].

Example: 80khours seems to have articles related to improving global poverty since 2011, but regarding AI articles are made only since 2017 despite encountering it in 2009.

Claim: It sounds weird.

Example: No idea how to answer this, probably not important as well.

Claim: When [5a] we [6].

Example: 80khours didn’t bother to publish an article until 2017 from 2009 to 2017 on AI safety

Claim: Many others understood the risks by [8].

Example:

I have seen the TED talk before in 2017 December, only now am I truly warming up to AI safety (possibly because I made the world class assumption that all EAO’s have exactly the same impact as GiveWell). I never really saw the need to give money to MIRI until a week back. Recently I started off with the “key ideas post” by 80khours and started taking apart the phrases when I realized the impact (57k people per extra person working on it). Additionally it helped to see several scientists giving a voice for AI safety here. Furthermore, it helped to make concrete the risks such as with the autonomous weapons potential to destabilize nations.

Claim: [9] can be [10].

Example: Here, UN has requested a ban on development of autonomous weapons. If all countries come to an agreement on this, it could potentially save us from extinction as a result of autonomous weapons.

(Working on a newly recognized problem)[1] means that (you risk throwing yourself at an issue that never materializes)[2] or (is solved easily)[3] – but (it)[] also means that you may have a (bigger impact by pioneering an area others have yet to properly appreciate)[4], just like (many of the highest impact people in history have done)[5].

Claim: [1] means [2].

Example: “Earlier this year, the U.S. defense think-tank Rand Corporation warned in a study that the use of AI in military applications could give rise to a nuclear war by 2040.”—Source

Seems like the claim could be wrong.

Claim: [1] means [3].

Example: I am not sure what they are getting at; they are basically covering all possible scenarios, aka you will see AI materialize or you won't see it materialize! Sounds useless to me.

Claim: You may have [4].

Example: There are only 100 people working in AI safety with a calculated 57k people to be saved if one additional person works on it, on average.

Claim: you may have [4], just like [5].

Split:

For [4], we think of working in AI safety and saving 57k people.

For [5], Gandhiji seems to have brought India independence by pioneering in the area of non-violence, which others were yet to properly appreciate. (Unable to estimate the impact, aka the number of lives saved.)

TIO Summary

(Talent)[1] we imagine is something that (people)[2] are born with. (Talent)[3] certainly seems to be (overrated)[4] especially when (it refuses to show itself even after many many years into the lives of exceptional musicians.)[5a]

Claim: [2] is born with [1] to become GREAT.

Question: Is [2], born with [1] to become GREAT?

Example: Jerry Rice, known as the greatest receiver in history— whose stats in total touchdown receptions are 50% higher than the runner up—was signed to the San Francisco 49ers after 15 teams passed him over.

Claim appears to be false.

Claim: [3] is [4] since [5].

Example: Jerry Rice, known as the greatest receiver in history— whose stats in total touchdown receptions are 50% higher than the runner up—was signed to the San Francisco 49ers after 15 teams passed him over. AKA, it doesn't look like his "in-born talent" showed itself even 15-20 years into his life.

In a study of outstanding American pianists, for example, you could not have predicted their eventual high level of achievement even after they’d been training intensively for six years;

A standard argument that comes up whenever any such (number of studies)[1] is presented is, ("But what about Mozart, and what about Tiger Woods?")[2]

Claim: People say [2] when [1] is presented.

Example: I am unable to provide an example for this

There seems to be an (explanation)[5] for these so-called (anomalies)[6]. In both the cases of (Mozart and Tiger Woods, their fathers)[7] seem to have started them off (quite early in their lives)[8] and spent quite some time building the (skill into their children)[9]. In the case of (Mozart, his father)[10] was a (highly accomplished pedagogue)[11], and in the case of (Tiger Woods, his father)[12] played (golf quite well)[13], was (extremely passionate about it)[14] and (was also a teacher)[15].

Claim: There is a [5], for [6].

Example:

For [5], we think of Tiger's father, who was in the top 10% of golf players himself, was a teacher, and dedicated his life to teaching Tiger Woods from the age of 7 months.

For [6], we think of Tiger Woods with the most number of PGA tour wins (and still playing), whereas 99% of people who golf don't even play professionally, let alone win a title.

Claim: [7] started their children at [8].

Example: Tiger’s father started him off at 7 months.

Claim: [7] have spent quite some time building [9]

Example: Tiger's father started Tiger off with a metal club and a putter at 7 months. By the age of 2 he is at the golf course, playing and practicing regularly. By age 4, he is learning from a professional coach.

Claim: [10] was [11]

Example: Wolfgang's father wrote a book on violin instruction that remained influential for decades. I don't think this is a good enough example.

Claim: [12] played [13].

Example: Tiger Woods' father was among the top 10% of players within a couple of years of starting golf.

Claim: [12] was [14].

Example: Tiger Woods' father was among the top 10% of players within a couple of years of starting golf. He wanted to teach his son ASAP.

Claim: [12] was [15].

Example: He coached Little League teams and took them to state tournaments in baseball.

(The question about talent)[1] is answered (in the fact that Mozart’s first piece regarded today as a masterpiece was composed when he was 21.)[2] Although it is (an early age)[3], it must be taken into account that (the boy)[4] has been in preparation since (very very young)[5]. In an attempt to compare (how Mozart fares with his current contemporaries)[5a], Scientists created a ‘(precocity index)[6]’. This roughly measures (how much better someone is compared to the average)[7]. (Mozart)[8] scored a (130 percent on the precocity index)[9] whereas (his current contemporaries)[10] scored (thirty to five-hundred percent)[10a]. (This)[11] is probably due to the (improved methods in teaching and learning)[12].

Claim: Talent is inborn due to 2.

because

Claim: 21 is [3].

Example:

“George Grove, the founding editor of “Grove’s Dictionary of Music and Musicians” has called Mendelssohn’s “Midsummer Night’s Dream” overture, Op. 21 “the greatest marvel of early maturity that the world has ever seen in music.” This work was completed by Mendelssohn on August 6, 1826 when Mendelssohn was 17 years and 6 months old.”—Source

For people known as GREATs, 21 doesn't seem to be very early.

Claim: [4] has been in preparation since [5].

Example: Mozart’s dad started him on a program of intensive training at the age of three.

Claim: Scientists created [6] for [5a]?

Example: It looks, here, like the precocity index was used well before the paper cited above about the precocity index of musicians.

It seems like it was not created for comparing Mozart to his contemporaries.

Claim: [6] measures [7], “roughly”.

Example: Mozart has a precocity index of 130%, which is based on a "simple formula":

-X/(Y-X)

where X = number of years of preparation before publicly playing a piece for the average person, and Y = number of years of preparation before publicly playing a piece for Mozart.

Claim: [8] scored [9].

Example: All this probably requires is a citation?

Claim: [10] scored [10a]

Example: All this probably requires is a citation?

Claim: [11] is probably due to [12].

because

In Tiger’s case (his father)[13] never really claimed any (inborn talent)[14], but he thought that the (boy seemed to grasp things)[15] rather quickly. And (both of them)[1] state (Hard Work for the Success of Tiger)[2].

Claim: [13] never claimed [14].

Example: A quick google search of “inborn talent tiger woods” does not come up with any news articles or media where [13] states [14].

Claim: [15] was rather quick.

I don’t know how I can find an example for that.

Claim: [1] state [2].

Example:

“People don’t understand that when I grew up, I was never the most talented. I was never the biggest. I was never the fastest. I certainly was never the strongest. The only thing I had was my work ethic, and that’s been what has gotten me this far.”—Tiger Woods

If you look at (Jack Welch, CEO of General Electric)[1], the (twentieth century's manager of the century)[2], he apparently showed no (inclination towards business until his mid-twenties)[3]. He started working in (chemical development operation at GE around that time)[3a]. And until that point there seems to be (nothing)[4] indicating the (business tycoon that he was going to become)[5]. Talent, where are you?

Claim: [1] is [2].

Example:

“Jack Welch is a celebrated, legendary CEO. In his two decades at the helm of General Electric, he grew revenues to $130 billion from $25 billion and profit to $15 billion from $1.5 billion.”—Source

Claim: [1] showed no [3].

Example: By the age of 25 he seems to have finished his master's and PhD in Chemical Engineering. He was even looking for faculty jobs at universities like West Virginia before he joined GE.

Claim: [1] was working in [3a].

Example: It's a question of fact. I guess I just cite a source. https://en.wikipedia.org/wiki/Jack_Welch

Claim: Until his mid-twenties, there was [4], indicating the [5].

Example:

By the age of 25 he seems to have finished his master's and PhD in Chemical Engineering. He was even looking for faculty jobs at universities like West Virginia before he joined GE.

If talent existed and (refused to show itself even after so many years of life)[6], it begs the question whether (innate ability)[7] (talent) even exists.

Claim: If [7] exists and it [6], then [7] doesn’t exist

Example: unable to give examples for this if-then/proof-type statements

Maybe (talent)[8] seems like it doesn’t exist, but surely (intelligence)[9] and (memory power)[10] should have a high influence. Spoiler Alert! (Nope)[11].

Claim: [8] seems to not exist

Example: By the age of 25, Jack Welch, the 'manager of the century', hadn't even begun doing anything related to business and was considering working as faculty at universities before he joined GE in Chemical Engineering.

Claim: [9] has high influence on Greatness/Success

Example: In a study of 45 thousand salesmen, whose IQ was pitted against their Sales ratings, it appears that intelligence showed a correlation of 0.04 with objective sales, whereas Achievement (Striving for competence in ones work) showed a correlation of 0.4 with objective sales.

Source

So, Absolutely NOT!

Claim: [10] has high influence on Greatness/Success

Example:

“A study with highly skilled chess players and non-experts in chess was done where all were shown real chess game positions of 25 pieces for 5-10 seconds. The chess masters were able to recall the position of every single piece, whereas the non-experts were able to recall 4 or 5 pieces. As expected. This was followed up with random placement of chess pieces and the same 5-10 seconds to remember each piece. The chess masters and the non-experts pretty much ended up with the same results.”— from Agent18’s blog

So, Absolutely Not!

A study was conducted in the business realm. (Salesmen)[12] were an (attractive subject for this study)[13] as it is rather clear to measure (output/success)[14]. (More number of sales)[14a] implies (more success)[14b]. (The study)[15] was the largest of its kind containing (data of several dozen studies amounting to 45k individuals)[16]. Because of such a large number the (endless sources of noise)[17] are expected to be drowned. (The bosses)[18] gave (good indication of the IQ of the person with their ratings)[19], and with the help of (sales they actually made)[20], (the results)[21] were compiled.

Claim: [12] are [13].

Example: As a Design Engineer, my contribution in terms of numbers ($ contributed to my company) is highly unclear. I guess as a result we have vague criteria for determining our impact, such as "how I did my work in a year, rated from 1-3" and "what I did, rated from 1-3". One day I work on a verification procedure, another day I work on some stage design which takes 2 years to make and whose value is not yet known. Whereas in Sales, it's OK/NOK. You either sold 5 bulbs or you didn't.

Claim: [12] are [13] as it is clear to measure [14].

Example: You either sold 5 bulbs or you didn’t.

Claim: [14a] implies [14b]

Example: If you sold 'n' tables for X $, then by selling '2n' tables you make 2X $.

Claim: [15] was the largest of its kind

Example: There are other papers with sample sizes ranging from 11 to 16k. This study had a sample of almost 46k. (It was actually a combination of several samples from different papers.)

Claim: [15] contained [16].

Example: [15] contained samples from studies with sample sizes from 11 to 16k.

Claim: With 45k samples, [17] is expected to be drowned

Example: This is a hard one, need to spend a lot of time on understanding randomness and come up with examples! Skip for now! I have no idea of examples for 17 in the context of the sales people, nor do I know why it is drowned or have an example for it.

Claim: [18] have [19].

Example: The bosses rated their staff on their performance, and it turns out the ratings have a 0.4 correlation with IQ. The bosses' ratings were also correlated with Achievement, but with a 0.2 correlation. It looks like bosses pick up on IQ more than anything else.

Source

Claim: [20] was used as a outcome

Example: “Interest appears to be a strong predictor of sales (0.3 correlation)”.

(Intelligence)[22] was (virtually useless in predicting how well a salesperson would perform)[23]. Whatever it is that makes (a sales ace)[24], it seems to be something other than (brainpower)[25].

Claim: [22] was [23].

Example: 0.04 correlation between General Cognitive ability and objective sales

Claim: [25] is not useful for [24]

Example: 0.04 correlation between General Cognitive ability and objective sales

(Another investigation on real world performance)[1] was with (betting of horses)[2]. (The goal)[3] was to forecast (post-time odds)[4]. Based on (this)[5] the (classification of experts and non-experts)[6] was done. (Both groups)[7] seem to have (not much differences)[8] in terms of (experience at the track)[9], (years of formal education)[10], and (even the IQ averages and variation)[11]. Further investigation suggested that (IQ’s)[12] didn’t help (predict if someone was going to be good or bad at this)[13]. (A person with IQ of 85 (“dull normal”))[14] was able to (pick out the top horse in 10/10 races)[15]. And (a non-expert with IQ 118)[16] (picked up the top horse for 3/10 cases)[17]. There are a (dozen factors)[18] that go into deciding the (outcome of the game)[18], like (how the horse fared in the last game)[19], (track condition)[20] etc… Apparently the (low-IQ-experts)[21] used (far complex models that took a wide consideration of multiple variables)[22] unlike (the high-IQ-non-experts)[23].

To work this out, it looks like I need the original paper. But I can’t find a readable copy of it. So I skip this for now.

And this doesn’t stop here. (The same traits)[1] are observed with (Chess, GO and even scrabble)[2]. “Scrabble users show below average results on tests of verbal ability.”, And some Chess grand masters have IQ that are below Normal. All in all,

Claim: [1] is observed with [2].

Example: “Scrabble users show below average results on tests of verbal ability.”

(IQ)[2] seems to be a (decent predictor of performance)[3] on an (unfamiliar task)[4], but (once a person has been at it for a few years)[5], (IQ)[6] predicts (little or nothing about performance)[7].

Claim: [2] seems to be [3] on [4].

Example: I don’t have an example

Claim: [2] seems to not be [3] on [5].

Example: Chess grand masters have IQ that are below normal.

Claim: [2], predicts [7].

Example: Chess grand masters have IQ that are below normal.

but what about memory?

The Czech master Richard Reti once played twenty-nine blindfolded games simultaneously. Miguel Najdorf, a Polish-Argentinean grand master, played forty-five blindfolded games simultaneously in Sao Paulo in 1947;

Surely (this)[1] is a (sign of the Divine)[2], right? Surprise, surprise! A study with highly skilled chess players and non-experts in chess was done where all were shown real chess game positions of 25 pieces for 5-10 seconds. The chess masters were able to recall the position of every single piece, whereas the non-experts were able to recall 4 or 5 pieces. As expected. This was followed up with random placement of chess pieces and the same 5-10 seconds to remember each piece. The chess masters and the non-experts pretty much ended up with the same results.

Claim: [1] is not [2].

Example: Despite chess players seeming to have great memory (Richard Reti playing 29 blindfolded games), they still suck as badly as non-experts, recalling only 4 or 5 pieces when the chess pieces are placed at random.

(The chess masters)[3] did not (have incredible memories)[4]. What they had was an (incredible ability to remember real chess positions)[5].

Claim: [3] did not have [4].

Example: When chess experts were asked to recall pieces placed in random on a chess board, they sucked as much as the non-experts.

Claim: [3] had [5].

Example:

“A study with highly skilled chess players and non-experts in chess was done where all were shown real chess game positions of 25 pieces for 5-10 seconds. The chess masters were able to recall the position of every single piece, whereas the non-experts were able to recall 4 or 5 pieces. As expected. This was followed up with random placement of chess pieces and the same 5-10 seconds to remember each piece. The chess masters and the non-experts pretty much ended up with the same results.”

(Experts remembered about 5-9 chunks of information at a time on the chess board)[6], which allowed them to (recall the positions of the pieces)[7]. The same was observed with Go and even Gomoku.

Claim: [6] allowed them to do [7].

Example: Experts could recall only 5-9 pieces when the chess pieces were placed at random, but were able to recall the entire board for real chess positions.

(Many decades of research)[8] have shown that (average short-term memory)[9] holds (only about seven items)[10]. (The capacity of short-term memory)[11] doesn’t seem to vary much from person to person; virtually (everyone’s short-term memory)[12] falls in the range of (five to nine items)[13].

Claim: [8] has shown [9] holds [10].

Example: The main article cited 29k times was written in 1956 and it is still being cited to this day i.e., 7 decades.

Claim: [9] holds [10].

Example: People who do not have years of experience in music are able to identify 7±2 tones, with a number corresponding to 1 tone.

Claim: [11] does not vary much from person to person

Example: experts and non experts in chess were able to identify 5-9 randomly placed pieces.

Claim: [12] falls in the range of [13].

Example: People who do not have years of experience in music are able to identify 7±2 tones, with 1 number corresponding to 1 tone.


As reflected later in the book (TIO, Chap 6), (remembering 49 games at once)[14] is still a (ginormous feat)[15] (not possible with this short term memory)[16]. More on this later.

Up until now it might seem that (we)[17] are just unstoppable forces who can all become (legends)[18]. But certainly there are (limitations)[19]. There are (physical limitations to achievement)[20] such as death and disease, (limitations related to age)[21], (personal dimensions)[22] etc… It appears that other than (physical limitations)[23], there are no (clearly understood or proven non-physical innate abilities inhibiting our potential for success)[24].

the goal

The goal could be to understand where I could work, what type of work I could do.

https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/#top

Devour the article!

https://80000hours.org/job-board/ai-ml-safety-research/

http://www.paulgraham.com/selfindulgence.html

how about some stuff from here:

http://agent18.github.io/deliberate-practice.html morgen freeman!

http://pradeep90.github.io/Deep-Thinking.html

Peter Singers morality essay

k-fold

If (K is small)[1] in a (K-fold cross validation)[2], is the (bias in the estimate of out-of-sample (test set) accuracy)[3] smaller or bigger? If (K is small)[4], is the (variance in the estimate of out-of-sample (test set) accuracy)[5] smaller or bigger? Is K large or small in leave-one-out cross validation?

For [1], we think of k=3

For [2], we think of the following:

  • Divide data set into 3 parts.

  • Take the first part as test and the rest as training

  • perform say linear regression with all variables and obtain coefficients

  • Compute Accuracy on test dataset

  • Do this for every part and compute average and variance!

Need to show actual code of example!
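Here is a minimal sketch of the 3-fold procedure listed above, using scikit-learn. The synthetic data set and the R² scoring are placeholder choices of mine, not part of the original question.

```python
# Minimal sketch of 3-fold cross-validation with linear regression.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Placeholder data set (assumption: any regression data would do here).
X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)

kf = KFold(n_splits=3, shuffle=True, random_state=0)  # K = 3, i.e. "K is small"
scores = []
for train_idx, test_idx in kf.split(X):
    # Fit on two parts, evaluate on the held-out part.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))  # R^2 on the test fold

print("per-fold R^2:", np.round(scores, 3))
print("mean:", np.mean(scores), "variance:", np.var(scores))
```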

For 3,

smaller or bigger than what?

Let's do the PG article on saving money instead of earning more, and what he means.

From 21 june

By (donating to the most effective organizations in an area)[1], just about (anyone in a well paid job)[2] can have a (substantial impact)[3].

Claim: [2] can have [3] by [1].

For [1], we think of donating to AMF (which is certified by GiveWell as the most effective charity)

For [2], we think of being in the top 1% in the world in terms of earnings (a salary allowing a 10% donation of 4k $ with ease).

For [3], we think of saving one life with 4k$ of donation. This will come to about 200 lives with increase in salaries over 30 years.
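A hypothetical sketch of the arithmetic behind [3]: lives saved = total donated / cost per life. The document only gives the 4k $ per life figure, the 4k $ starting donation, and the ~200-lives endpoint; the 11%/year donation growth below is my own assumption, chosen purely to illustrate one schedule that reproduces that endpoint.

```python
# Hypothetical sketch: lives saved over a 30-year career of donating.
COST_PER_LIFE = 4_000  # $ per life, the figure used above

# Assumption (mine): donations start at 4k $ and grow 11% per year.
donations = [4_000 * 1.11 ** year for year in range(30)]
lives = sum(donations) / COST_PER_LIFE
print(round(lives))  # ~199, i.e. roughly the "about 200 lives" above
```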

By donating to AMF (certified by GiveWell as most effective charity),

You may be able to (take this a step further)[4] and ‘earn to give’ by (aiming to earn more than you would have done otherwise and to donate some of this surplus effectively)[5].

Not (everyone)[6] wants (to make a dramatic career change)[7], or is (well-suited to the narrow range of jobs that have the most impact on the most pressing global problems)[8]. However, by donating, (anyone)[9] can (support these top priorities, ‘convert’ their labour into labour working on the most pressing issues, and have a much bigger impact)[10].

Summary

(Many experts)[9] believe that there is a (significant chance that humanity will develop machines more intelligent than ourselves during the 21st century)[10]. (This)[] could lead to (large, rapid improvements in human welfare)[11], but there are (good reasons)[1] to think that (it could also lead to disastrous outcomes)[2]. The problem of (how one might design a highly intelligent machine to pursue realistic human goals safely)[3] is (very poorly understood)[4]. If (AI research continues to advance without enough work going into the research problem of controlling such machines)[5], (catastrophic accidents)[6] are much more likely to occur. Despite (growing recognition of this challenge)[7], (fewer than 100 people worldwide)[8] are directly working on the problem.

Claim: The problem of [3] is poorly understood.

Example:

For [3], we think of using Machine Learning algorithms that do not show the probability values, i.e. "airplane" instead of "99% airplane and 1% cat". This aims to not allow hackers to train adversarial examples, as shown here.

For [4], we think of the black-box adversary, which is able to trick Google and Amazon ML models such that they get its adversarial examples wrong 96% and 88% of the time.

Claim: If [5], then [6] are more likely.

Example: The blackbox adversary is able to trick Google and Amazon ML models such that they get 96% and 88% of supplied adversarial examples wrong.

Claim: Despite [7], [8] are directly working on this problem.

Example:

For [7], MIRI was founded in 2000 and OpenAI was started in 2015

For [8], we think of the 15 organizations that work on AI safety with say 12 people each resulting in roughly 180 people worldwide.

[8] directly working on a problem is easy to give an example for. But how do I give an example for "despite [7]…"?

The (arguments for working on this problem area)[1] are complex, and what follows is only (a brief summary)[2].

??? come back

Superintelligence: Paths, Strategies, Dangers, by Oxford Professor Nick Bostrom. The Artificial Intelligence Revolution, a post by Tim Urban at Wait But Why, is shorter and also good (and also see this response).

When Tim Urban started investigating his article on this topic, he expected to finish it in a few days. Instead he spent weeks reading everything he could, because, he says, “it hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far the most important topic for our future.”

skipping this as this is some personal account. There are claims like, “he started doing this…”, “he realized it takes more time…”, I am not sure it is useful to break my head over trying to give examples for these.

In October 2015 an AI system named AlphaGo shocked the world by defeating a professional at the ancient Chinese board game of Go for the first time. A mere five months later, a second shock followed: AlphaGo had bested one of the world’s top Go professionals, winning 4 matches out of 5. Seven months later, the same program had further improved, crushing the world’s top players in a 60-win streak. In the span of a year, AI had advanced from being too weak to win a single match against the worst human professionals, to being impossible for even the best players in the world to defeat.

no claims here other than claims of fact. So I skip this!

(This)[1] was shocking because (Go)[2] is considered (far harder for a machine to play than Chess)[3]. (The number of possible moves in Go)[4] is (vast)[5], so it’s not possible to (work out the best move through “brute force”.)[6] Rather, the game requires (strategic intuition)[7]. (Some experts)[8] thought it would take at least (a decade for Go to be conquered)[9].

Claim: [1] was shocking

Example: “Some experts thought it would take at least a decade”, but it took less than a year.

Is this an acceptable example?

Claim: [1] was shocking because [2] is [3].

Claim: [2] is [3].

Example: Chess starts with 16 different positions playable at the start, whereas Go starts with 361 positions at the start.

Claim: [4] is [5],

Example: 361 positions at the start as compared to 16 positions for chess.

Claim: [4] is [5], so it is not possible to [6].

Example: “The search space in Go is vast – more than a googol times larger than chess (a number greater than there are atoms in the universe!). As a result, traditional “brute force” AI methods – which construct a search tree over all possible sequences of moves – don’t have a chance in Go. “—Source.

Claim: The game requires [7].

Example:

  • due to search space being more than googol, it is not possible to use brute force.

  • It uses neural networks, and machine learning from over 30 million moves from the past, and it eventually became the best Go player in the world.

Claim: [8] thought it would take at least a decade, but it took less than a year to be the best.

Example: I found only quotes; no one says who said it. So SKIP!

Since then, (AlphaGo)[10] has discovered that (certain ways of playing Go that humans had dismissed as foolish for thousands of years were actually superior.)[11] Ke Jie, the top ranked go player in the world, has been astonished: “after (humanity)[14] spent (thousands of years improving our tactics)[15],” he said, “computers tell us that humans are completely wrong. I would go as far as to say not a single (human)[13] has touched (the edge of the truth of Go)[12].”9

Here I need to dive deep into Go. Although not useful, it could still help with taking down an unknown topic, aka via research.

Claim: [10] has discovered [11].

Example: “Master made moves that seemed foolish but inevitably led to victory this week over the world’s reigning Go champion, Ke Jie of China”— source.

Claim: [14] has spent [15].

Example:

Claim: [13] has touched [12].

Example: A game being played for 2000 years

don’t know what it means,

(The advances above)[1] became possible due to (progress in an AI technique called “deep learning”)[2]. In the (past)[3], we had to give computers (detailed instructions for every task)[4]. Today, we have (programs that teach themselves how to achieve a goal)[5] – for example, a program was able to learn how to play Atari games based only on reward feedback from the score. This has been made possible by (improved algorithms)[6], (faster processors)[7], (bigger data sets)[8], and (huge investments by companies like Google)[9]. It has led to (amazing advances far faster than expected)[10].

Claim: [1] became possible due to [2].

Example: Winning reliably took AlphaGo training on 30 million moves from games played by experts, until it could predict the expert move 57% of the time, and thousands of games between its neural networks, to improve itself gradually over time.

I don't think it answers it!

Claim: In [3], we gave computers [4].

Example: In video games, bots are written based on rules: if the opponent has reached position X, bot A will charge at him and shoot at X frequency with its aim at 50%.

Whereas today, AlphaZero can learn and play chess like a “master” in 4 hours.

Claim: We have [5], today.

Example: AlphaGo Zero is able to learn the entire game of Go without any human intervention, to the level of AlphaGo, which is currently the world's best Go player.

Claim: We have [5], today because of [6].

Example: I don't know how to give an example for this, or where to find one. I expect the example to look like: we used algorithm A and B, and B had superior performance… Algorithm A had such-and-such performance and this drawback; algorithm B in 2019 doesn't have those.

Claim: We have [5], today because of [7].

Example: same as above

Claim: We have [5], today because of [8].

Example: because and same as above

Claim: We have [5], today because of [9].

Claim: We have [5], today because of [10].

Claim: [6], [7], [8], [9] have led to [10].

skip

Plan change

  • Will only deal with top claims that are useful for me. Wtf does that mean?

    I am not going to look into lines like:

    “this has been made possible by improved algorithms, faster processors”. I’d rather take this at face value; I don’t see how they are helping me. Contrast this to “It has led to amazing advances far faster than expected”; I care about this, because it gives me concreteness in understanding how fast the problem is progressing. OK, great! I think it’s still a feeling, but at least we have two examples that “differentiate them”…

    Let’s make all the claims and skip them or go further. Sounds good!

  • When to skip:

    • because due
    • too much work for useless/non-useful stuff
    • too easy stuff
    • claims of fact

And that my friend is your feedback, the feedback of truly understanding, not doing random stuff and reexplaining the same joke over and over again.

Don’t seem to like browsing on the internet hoping for some results to match.

If you can’t open the source move on! Don’t waste time, thinking you are DPing!

Summary

Write all claims, skip useless stuff, focus on useful stuff that you are confused about that you feel you need to learn more about.

plan in action

But (those)[1] are just (games)[2]. Is general (machine intelligence)[3] still far away? Maybe, but maybe not. It is really hard to (predict the future of technology)[4], and (lots of past attempts)[5] have been (completely off the mark)[6]. However, (the best available surveys of experts)[7] assign (a significant probability to the development of powerful AI within our lifetimes)[8].

Claim: [1] are just [2].

skip

Claim: [3] is still far away

Claim: Maybe but maybe not

Skip. I feel like puking at these statements that just seem to simply waste your time. Is this bad writing? What are the dimensions, bro? Exactly.

Claim: It is really hard to [4].

Example: “Some experts thought it would take at least a decade for Go to be conquered”, but it has already arrived and it is the best player in the world. — source

Claim: [5] about [4], has been [6].

Example: Can’t open the financial times source, so don’t know by how far they missed the mark.

Claim: [7], assigns [8].

Example: Of the 29 people who answered the survey, more than half thought that there was a greater than 50% chance of high-level machine intelligence by 2050. and 10% chance of it happening by 2024.

(One survey of the 100 most-cited living computer science researchers, of whom 29 responded)[], found that (more than half thought there was a greater than 50% chance of “high-level machine intelligence” – one that can carry out most human professions at least as well as a typical human – being created by 2050, and a greater than 10% chance of it happening by 2024)[] (see figure below).2 [10]

Skip, as they are questions of fact.

Impacts

If the (experts are right)[1], (an AI system that reaches and then exceeds human capabilities)[2] could have (very large impacts)[3], both (positive)[4] and (negative)[5]. If (AI matures in fields such as mathematical or scientific research)[6], (these systems could make rapid progress in curing diseases or engineering robots to serve human needs.)[7]

Claim: if [1], [2] could have [3].

skip as it is covered in the next claims

Claim: [2] could have [3] which is +ve

Example: Humans can miss the signs of cancer in 20-30% of cases (fuck me!). We seem to be talking about 3.6m people who are probably going to die. (I assume that if you leave cancer undetected you are going to be fucked.) If AI is able to detect it with high accuracy, it could lead to attempting to save these 3.6m lives.

But I don’t think this is what they are talking about in [6] and [7]. I currently have only a hypothetical example, and I think I am still missing a bit of detail.
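To make the arithmetic in this example explicit, here is a minimal sketch. The 18m annual deaths figure is the one quoted further down, the 20% miss rate is the low end of the range above, and the 90% AI catch rate is purely my own made-up assumption.

```python
# Rough expected-lives arithmetic for the "+ve impact" example above.
annual_cancer_deaths = 18_000_000   # figure quoted later in these notes
human_miss_rate = 0.20              # low end of the 20-30% range
ai_catch_rate = 0.90                # hypothetical assumption, not from the source

missed_cases = annual_cancer_deaths * human_miss_rate      # ~3.6m per year
lives_potentially_saved = missed_cases * ai_catch_rate     # ~3.2m per year

print(f"Missed by humans per year: {missed_cases:,.0f}")
print(f"Potentially saved with AI detection: {lives_potentially_saved:,.0f}")
```

Even under these rough assumptions, what matters for the claim is the order of magnitude (millions of lives per year), not the exact number.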

Claim: [2] could have [3] which is -ve

Example:

*“The owners of a pharmaceutical company use machine learning algorithms to rapidly generate and evaluate new organic compounds.

As the algorithms improve in capability, it becomes increasingly impractical to keep humans involved in the algorithms’ work – and the humans’ ideas are usually worse anyway. As a result, the system is granted more and more autonomy in designing and running experiments on new compounds.

Eventually the algorithms are assigned the goal of “reducing the incidence of cancer,” and offer up a compound that initial tests show is highly effective at preventing cancer. Several years pass, and the drug comes into universal usage as a cancer preventative…

…until one day, years down the line, a molecular clock embedded in the compound causes it to produce a potent toxin that suddenly kills anyone with trace amounts of the substance in their bodies.

It turns out the algorithm had found that the compound that was most effective at driving cancer rates to 0 was one that killed humans before they could grow old enough to develop cancer. The system also predicted that its drug would only achieve this goal if it were widely used, so it combined the toxin with a helpful drug that would incentivize the drug’s widespread adoption.”*

18m people die of cancer every year.

Claim: If [6], then [7].

Example:

On the other hand, (many people)[1] worry about the (disruptive social effects of this kind of machine intelligence)[2], and in particular its (capacity to take over jobs previously done by less skilled workers)[3]. If the (economy is unable to create new jobs for these people quickly enough)[4], there will be (widespread unemployment and falling wages)[5].11 (These outcomes)[6] could be avoided through (government policy)[7], but doing so would likely require (significant planning)[8].

Claim: [1] worry about [2].

Example: 80khours is trying to place many people in AI policy to … ???

Claim: [2] is [3].

Example: Google assistant is already able to make appointments for you by speaking like a human.

Claim: [1] worry about [3].

Example: 80khours???

Claim: If [4], there will be [5].

Example: ???

Claim: [6] could be avoided through [7], but would require [8].

Example: ???

Rant

Lots of OB, moving, walking, not deep working. Feeling bored! And dragging myself to get success. There used to be times in the past when I would look at the clock and it would be 2 hrs. Nowadays 1 hr is already hard, and the idea of being done with this shit is very soothing.

Read Deep Work? Or write an article about Mathivanan? Also, I don’t know what I am doing anymore, as I just seem to be spending time.

What is rewarding is when I find a great example and feel like I understand something that an STM would respect, but that takes a lot of time reading and googling articles.

  • need to read articles

  • getting lost trying to find exactly what I would like to find

Statistics

| Date | phrases/hr | claims/hr | actual claims/hr | Comments |
| --- | --- | --- | --- | --- |
| 17-05-2019 | 12 | 7 | - | |
| 18-05-2019 | 10 | 4 | - | |
| 19-05-2019 | | 1 | | |
| 20-05-2019 | | 1 | | |
| 21-05-2019 | 3 | 2 | | |
| 22-05-2019 | 5 | 3 | | |
| 23-05-2019 | 2 | 2 | | |
| 24-05-2019 | 4 | 2 | | |
| 25-05-2019 | - | - | | |
| 26-05-2019 | 10 | 7 | | Good, did a proper one hr |
| 27-05-2019 | 2 | 1 | | Quite hard, was working on the next phrase |
| 28-05-2019 | 3 | 1 | | ’Twas hard! |
| 29-05-2019 | | 5 | 0 | 0 worked out! |
| 30-05-2019 | | 0 | | failed |
| 31-05-2019 | | 0 | | tried hard, had to read |
| 01-06-2019 | 2 | | | |
| 02-06-2019 | | | | |
| 03-06-2019 | 3 | 2 | | ok! last example fine |
| 04-06-2019 | 4 | 2 | | good day! repeat 1! |
| 05-06-2019 | 3 | 1 | | |
| 06-06-2019 | 3 | 1 | | repeated the same! |
| 07-07-2019 | 2 | 1 | | |
| 08-07-2019 | 2 | 1 | | |
| 09-07-2019 | | | | failed |
| 10-07-2019 | 30 | 17 | | 5 hrs |
| 11-07-2019 | | | | |
| 12-07-2019 | 3 | 2 | | |
| 13-07-2019 | | | | failed |
| 14-07-2019 | 3 | 2 | | |
| 15-07-2019 | 32 (0.5/m) | 20 | | 6 hrs! |
| 16-07-2019 | 27 (1/m) | 16 | | 6 hrs |
| 17-07-2019 | 60 (1.3/m) | 39 | | 6 hrs, but on my article on DP |
| 18-08-2019 | 15 + 20 | 7 + 14 | | 3.5 hrs + 2.5 hrs; 80khours art + mine |
| 19-09-2019 | 20 + 25 | | | 80khours AI, 5.5 hrs |
| 20-09-2019 | 1 | | | 1 hr AI |
| 21-09-2019 | 4 | 2 | | 2 hrs |
| 22-09-2019 | | | | Did 2 hrs |
| 23-09-2019 | | | | Did 3-4 hrs |
| New plan | | | | |
| 24-09-2019 | 8-5 | 6-3 | | |

I am dreaming most of the time! I don’t have a deadline or any focus, I think. I am rarely able to do this. I am thinking about life in India! This should be painful, not boring! And I think it is boring, and the very second the clock ticks from 58 to 60 minutes, Pandian is out!

Need to finish 10 phrases today, period!

Letter to an STM

Thalaiva,

If you were me, how would you be spending your time? On what exactly would you be spending your time? Would you just keep trying to clock hour after hour of pure DP on Concrete Thinking? Would you also work on DS?

Why am I looking at DS?

Based on 80khours, I came to the conclusion to work on DS because it would give me more money (1.5 times as much in the US) than engineering, and I could move to the US (for cryonics and more money than here). People from all sorts of backgrounds are able to move into DS, so it should be easy to switch. I personally know many people who have moved into DS without too much difficulty after a master’s at TU Delft. The route I envision: start DS work within this year, move to some big DS company in a few years (2-3 years), do a lot of “critical thinking”, make my way to some EAO like, say, GiveWell within the next 5-8 years, and really start saving large numbers of people.

Should I be working on DP for CT completely instead?

I ask this because I am not sure of the consequences of working on this (DP for Concrete Thinking), i.e., I don’t have an example where this makes a “difference” in my life. I don’t know how to compare DS and DP for CT. But I will take your word for it and slog my ass off at least for the coming 4 weeks (just the beginning) (>4 hrs per day average of DP guaranteed). Also, over the last few weeks there has been a dip in the number of hours I’ve clocked, so I SUCK, and I don’t want to SUCK: 4.15 hrs/day in a good week and 2.9 hrs/day last week (I start half an hour after dinner, I take long breaks scrolling FB, etc., including weekends). (I count work on DS and DP for Concrete Thinking together in the above.)

I need your thoughts on this. I am not sure how this 1 hr per day of DP for CT is helping, as I barely get shit done in 1 hr (5 claims, sometimes 1 claim, as things take time to puzzle out). I stop before I get into the groove. If DS should not be my focus right now, I am more than willing to stop at course 8 of 10 (20 more hrs of work), not get closure, and not be able to save face when people ask why I am not done with these courses yet.

I don’t want to do only 1 hr if this is the most important thing to focus on. I don’t want to do random phrases from texts and move on to other ones when it gets hard for me. I want to take a full-blown essay (80khours key ideas) or a chapter from some book on regression and tear it apart over however many days it takes, full time. Why? That way I guess I get some work done with large numbers of repetitions. Also, I can gather some statistics on work done per hour on similar work and compare over the course of the exercise.

So,

Thalaiva, what would you do if you were me? Let’s go big or go home!

And last but not least, can you please do this for me? It would help me big time to compare and improve. Can you take this section on Longtermism (5 small paragraphs) and detail it out for me, as if you were submitting it for correction, by including the phrases, the claims, and the examples? I have several questions, as I have shown in my take; I would like to see you do it and carry that “attitude” through the entire essay. It feels like this is one type of essay vagueness I need to handle. And for the most part I am wondering what depth I should go into, etc., as discussed below.

I know you always say “send me 200 phrases!” Name your price for this, just this one time.

Thank You for everything. Cheers!

Feedback checklist


  1. Could it be that this claim has no example at all? For example, “civilization is at stake”.

  2. Could this claim be false? Remember the “there is no doubting” example.

  3. Does this claim say anything about “best” (need to compare against the entire set) or “most” (need to show it’s the majority in the set) or “no” (need to show that nothing in the set matches)?

  4. Did you stick to examples that are in the chapter itself? That way you don’t have to search online for too long.

  5. Did you use a running example for a technical phrase? There will be lots of new phrases in the book, like “convergent instrumental value” and “orthogonality thesis”. Whenever you see them, you should recall whatever running example you’ve used.

  6. If this is an “if-then” claim, did you either get a concrete example or mark it as having no example?

Short names: none; false; best; chapter; running; if-then.

Please refer to the checklist after every claim analysis to ensure you’re not making old mistakes. If you want to add to the checklist based on mistakes found in past feedback, that’s great.

Mission

Mission #9: Your mission, should you choose to accept it, is to concretely analyze the key claims in the book Superintelligence by Nick Bostrom (the book mentioned in the Elon Musk tweet above). He’s a PhD at Oxford who’s been writing about AI safety along with guys like Eliezer for nearly two decades. The book has detailed arguments and examples about all the topics like possible paths to “superintelligence” (whatever that means), types of “superintelligence”, the control problem, etc.

No need to write “Question: “ - doesn’t seem to have changed your answers.

Don’t have to go sentence by sentence; look at one key claim for each section, usually the one in the first few paragraphs, or one for each paragraph if you feel it’s an important section. For example:

CHAPTER 2 Paths to superintelligence

Machines are currently far inferior to humans in general intelligence. Yet one day (we have suggested) they will be superintelligent. How do we get from here to there? This chapter explores several conceivable technological paths. We look at artificial intelligence, whole brain emulation, biological cognition, and human-machine interfaces, as well as networks and organizations. We evaluate their different degrees of plausibility as pathways to superintelligence. The existence of multiple paths increases the probability that the destination can be reached via at least one of them.

The key claim is “How do we get from here to there? Answer: Artificial intelligence, whole brain emulation, …”

Claim

I am possibly going to enjoy the experience much, much more than the last few days of fighting to complete the 2 hours. It seems like this is a signal for a panindian Pandian to do something else or do it differently.

Without Examples I am nothing!

Questions to an STM

How to identify claims of importance?

What is the goal here?

People are making so many claims (as listed in black in the book).

How do you go about it?