Certainly; it wasn't my intention to make it seem like an 'either-or'. I believe there's a lot of room for imported quality teaching, and a fairly well-educated volunteer might be better at teaching than the average local teacher. I didn't find the way they taught there very effective: a lot of repeating the teacher's words, no intuition built for maths or physics… I think volunteers could certainly help with that, and also by teaching the subjects they are more proficient at than the local teachers (e.g. English). I agree there is the potential to use volunteers in a variety of ways to raise the level of education, and also to try to make the changes permanent once the volunteers leave.
Strong upvote. I found that almost every sentence was extremely clear and conveyed a transparent mental image of the argument being made. Many times I caught myself saying "YES!" or "This checks" as I read a new point.
That might involve not working on a day you've decided to take off, even if something urgent comes up; or deciding that something is too far outside your comfort zone to try right now, even if you know that pushing further would help you grow in the long term.
I will add that, for many routine activities or personal dilemmas with short- and long-ter...
Thank you very much for this sequence. I knew fear exerted a great influence on (and impediment to) my actions, but I hadn't given it such a concrete form, nor especially a weapon (= excitement) to combat it, until now.
Following matto's comment, I went through the Tuning Your Cognitive Strategies exercise, spotting microthoughts and extracting the cognitive strategies and deltas between such microthoughts. When evaluating a possible action, the (emotional as much as cognitive) delta "consider action X -> tiny feeling in my chest or throat -> meh, I'm not ...
Thank you very much for this post, I find it extremely valuable.
I also find it especially helpful for this community, because it touches on what I believe are two main sources of the anxiety and existential dread that might be common among LWers:
As others have pointed out, there's a difference between a) problems to be tackled for the sake of the solution, vs b) problems to be tackled for the sake (or fun) of the problem. Humans like challenges and puzzles, and like to solve things themselves rather than having the answers handed down to them. Global efforts to fight cancer can be inspiring, and I would guess that for most medical researchers a motivation is their own involvement in this very process. But if we could push a button to eliminate cancer forever, no sane person would refuse to press it.
I think we should ...
The phases you mentioned in learning anything seem especially relevant for sports.
1. To have a particular kind of feeling (a felt sense) that represents something (control, balance, singing right, playing the piano right, everything being done)
2. A range of intensity that we should keep that felt sense within, in a given context (either trying to make sure we have some positive feeling, or that we avoid some negative feeling)
3. Various strategies for keeping it within that range
Below the surface, every sport is an extremely complex ...
Thank you for your explanations. My confusion arose not so much from associating agency with consciousness, morality, or other human attributes, but from whether agency is judged from an inside, mechanistic point of view of the system, or from an outside, predictive point of view. From the outside, it can be useful to say that "water has the goal of flowing downhill", or that "electrons have the goal of repelling electrons and attracting protons", inasmuch as "goal" is taken to mean "tendency". From an inside view, as you said, it's nothing like the agency we know; th...
(I reply to both you and @Ericf here). I do struggle a bit to make up my mind on whether drawing a line around agency is really important. We could say that a calculator has the 'goal' of returning the right result to the user; we don't treat a calculator as an agent, but is that because of its very nature and the way in which it was programmed, or is it a matter of capabilities, it being incapable of making plans and considering a number of different paths to achieve its goals?
My guess is that there is something that makes up an agent and which has to do wi...
This seems to me more like a tool AI, much like a piece of software asked to carry out a task (e.g. an Excel sheet for doing calculations), but with the addition of processes or skills for creating plans and searching for solutions, which would endow it with agent-like behaviour. So, for the AutoGPT-style AI contemplated here, it appears to me that this agent-like behaviour would not emerge out of the AI's increased capabilities and achievement of the general intelligence to reason, devise accurate models of the world and of humans, and plan; nor would...
The first few times I read LW articles, and especially those by Eliezer, it was common for me to think that I simply wasn't smart enough to follow their lines of argumentation. It's precisely missing these buckets and handles, these modes of thought and the expressions/words used to communicate them, that made it hard at the start; as I acquired them, I could feel I belonged to the community. I suppose this happens to all newcomers, and it's understandable to feel this helplessness and inability to contribute for as long as you haven't acquired the requisite materia...
I like this model, much of which I would encapsulate in the tendency to extrapolate from past evidence, not only because it resonates with the image I have of people who are reluctant to take existential risks seriously, but because it is more fertile ground for actionable advice than the simple explanation of "because they haven't sat down to think deeply about it". This latter explanation might hold some truth, but tackling it would be unlikely to make them take more action towards reducing existential risks if they weren't aware of, and weren't able to fi...
The broad spirit they want to convey with the word "generalisation", namely that two systems can exhibit the same desired behaviour in training yet end up with completely different goals in testing or deployment, seems fair as a statement of the general problem. But I agree that "generalise" can give the impression of an "intentional act of extrapolation", of creating a model that is consistent with a certain specification. And there are many more ways in which the AI can behave well in training and not in deployment, without needing to assume it's extrapolating a mod...
This is a really complicated issue because different priors and premises can lead you to extremely different conclusions.
For example, I see the following as a typical view on AI among the general public:
(the common person is unlikely to reason this deeply, but could arrive at these arguments if they had to debate the issue)
Premises: "Judging by how nature produced intelligence, and by the incremental progress we are seeing in LLMs, artificial intelligence is likely to be achieved by packing more connections into a digital system. This will allow the A...
I have been using the Narwhal app for the past few days, a social discussion platform "designed to make online conversations better" that is still at the prototype stage. It basically works like this: there are several discussion topics posted by other users, each formulated with an initial question (e.g. "How should we prioritise which endangered species to protect?" or "Should Silicon Valley be dismantled, reformed, or neither?") and a description, and you can comment on any of them or reply to others' comments. You can also suggest your own discussions.
Here are...
It's nice to hear about the high standards you continue to pursue. I agree that LessWrong should set itself much higher standards than other communities, even other rationality-centred or rationality-adjacent communities.
My model of this big effort to raise the sanity waterline and prevent existential catastrophes contains three concentric spheres. The outer sphere is all of humanity; ever-changing yet more passive. Its public opinion is what influences most of the decisions of world leaders and companies, but this public opinion can be swayed by other, more directed force...
Could we take from Eliezer's message the need to redirect more efforts into AI policy and into widening the Overton window to try, in any way we can, to give AI safety research the time it needs? As Raemon said, the Overton window might be widening already, making more ideas "acceptable" for discussion, but it doesn't seem to be enough. I would say the typical response from the overwhelming majority of the population and world leaders to misaligned AGI concerns is still to treat them as a panicky sci-fi dystopia rather than to say "maybe we should stop every...
I partly support the spirit behind this feature: providing more information (especially to the commenter), making readers more engaged and involved, and expressing a reaction with more nuance than a mere upvote/downvote. I also like that, as with karma, there are options for negative (but constructive) feedback, which I mentioned here when reviewing a different social discussion platform that had only positive reactions such as "Aha!" and "clarifying".
In another sense, I suspect (but could be wrong) that this extra information could also have ...