Some of the later levels on this?
https://en.wikipedia.org/wiki/Notpron
“Notpron is an online puzzle game and internet riddle created in 2004 by German game developer David Münnich. It has been named as ‘the hardest riddle available on the internet.’”
“Notpron follows a standard puzzle game layout, where the player is presented with a webpage containing a riddle and must find the answer to the riddle in order to proceed to the next webpage”
“Each level answer or solution is unique, often requiring specific skills such as decoding ciphers, ...
My model of a non-technical layperson finds it really surprising that an AGI would turn rogue and kill everyone. For them it’s a big and crazy claim.
They imagine that an AGI will obviously be very human-like and the default is that it will be cooperative and follow ethical norms. They will say you need some special reason why it would decide to do something so extreme and unexpected as killing everyone.
When I’ve talked to family members and non-EA friends that’s almost always the first reaction I get.
If you don’t address that early in the introduction I th...
I’ll give it a go.
I’m not very comfortable with the term enlightened but I’ve been on retreats teaching non-dual meditation, received ‘pointing out instructions’ in the Mahamudra tradition and have experienced some bizarre states of mind where it seemed to make complete sense to think of a sense of awake awareness as being the ground thing that was being experienced spontaneously, with sensations, thoughts and emotions appearing to it — rather than there being a separate me distinct from awareness that was experiencing things ‘using my awareness’, which is...
Great summary, and really happy that this helped you!
I'd recommend people read Rick Hanson's paper on HEAL, if they're interested too: https://rickhanson.net/wp-content/uploads/2021/12/LLPE-paper-final2.pdf
Does it make sense to put any money into a pension given your outlook on AGI?
I really like the way it handles headlines and bullet point lists!
In an ideal world I'd like the voice to sound less robotic. Something like https://elevenlabs.io/ or https://www.descript.com/overdub. How much I enjoy listening to text-to-speech content depends a lot on how grating I find the voice after long periods of listening.
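For what it's worth, here's a rough sketch of what plugging in a nicer voice might look like via the ElevenLabs HTTP API. The endpoint path, header name, and voice id are from my memory of their docs and are placeholders, so treat this as illustrative rather than working integration code:

```python
import requests

# Illustrative only: the voice id and API key are placeholders, and the endpoint
# details are my best recollection of the ElevenLabs docs, which may have changed.
API_KEY = "your-elevenlabs-api-key"
VOICE_ID = "your-chosen-voice-id"

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": "The text of the post goes here."},
)
response.raise_for_status()

# The API returns audio bytes, which could then be saved and fed into a podcast feed.
with open("post_audio.mp3", "wb") as f:
    f.write(response.content)
```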
Honestly, no plans at the moment. Writing these was a covid lockdown hobby. It's vaguely possible I'll finish it one day but I wouldn't hold your breath. Sorry.
But I rarely see anyone touch on the idea of "what if we only make something as smart as us?"
But why would intelligence reach human level and then halt there? There's no reason to think there's some kind of barrier or upper limit at that exact point.
Even in the weird case where that were true, aren't computers going to carry on getting faster? Just running a human-level AI on a very powerful computer would be a way of creating a human scientist that can think at 1000x speed, create duplicates of itself, and modify its own brain. That's already a superintelligence, isn't it?
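As a back-of-the-envelope illustration of why "human-level but fast" already looks superintelligent (the speedup and copy count below are made-up numbers, not estimates):

```python
# Rough arithmetic: subjective researcher-years produced per calendar year by a
# human-level AI that thinks faster than a person and can duplicate itself.
# Both figures are illustrative assumptions.
speedup_factor = 1000   # thinks 1000x faster than a human researcher
num_copies = 100        # parallel duplicates of the same mind

researcher_years_per_year = speedup_factor * num_copies
print(researcher_years_per_year)  # 100,000 subjective researcher-years every calendar year
```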
A helpful way of thinking about 2 is imagining something less intelligent than humans trying to predict how humans will overpower it.
You could imagine a gorilla thinking "there's no way a human could overpower us. I would just punch it if it came into my territory."
The actual way a human would overpower it is literally impossible for the gorilla to understand (invent writing, build a global economy, invent chemistry, build a tranquilizer dart gun...)
The AI in the AI takeover scenario is that jump of intelligence and creativity above us. There's literally no way a puny human brain could predict what tactics it would use. I'd imagine it almost definitely involves inventing new branches of science.
I think that's true of people like Steven Pinker and Neil deGrasse Tyson. They're intelligent but clearly haven't engaged with the core arguments, because they're saying stuff like "just unplug it" and "why would it be evil?"
But there are also people like...
Robin Hanson. I don't really agree with him but he is engaging with the AI risk arguments, has thought about it a lot and is a clever guy.
Will MacAskill. One of the most thoughtful thinkers I know of, who I'm pretty confident will have engaged seriously with the AI Risk arguments. His p(doom) is far lower...
I find Eliezer's and Nate's arguments compelling, but I do downgrade my p(doom) somewhat (-30% maybe?) because there are intelligent people (inside and outside of LW/EA) who disagree with them.
I had some issues with the quote:
"Will continue to exist regardless of how well you criticize any one part of it."
I'd say LW folk are unusually open to criticism. I think if there were strong arguments they really would change people's minds here. And especially arguments that focus on one small part at a time.
But have there been strong arguments? I'd love to read them.
For me the core of it feels less like "satisfying the values you think you should have, while neglecting the values you actually have" and more like having a hostile orientation to certain values I have.
I might be sitting at my desk working on my EA project and the parts of me that are asking to play video games, watch arthouse movies, take the day off and go hiking, find a girlfriend are like yapping dogs that won't shut up. I'll respond to their complaints once I've finished saving the world.
Through CFAR workshops, lots of goal factoring, journ...
You are however only counting one side here
In that comment I was only offering plausible counter-arguments to "the amount of people that were hurt by FTX blowing up is a rounding error."
How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation without running any numbers seems to me unjustified.
I think we basically agree here.
I'm in favour of more complicated models that include more indirect effects, not fewer.
Maybe the difference is: I think ...
If you believe that each future person is as valuable as each present person and there will be 10^100 people in the future lightcone, the amount of people that were hurt by FTX blowing up is a rounding error.
But you have to count the effect of the indirect harms on the future lightcone too. There's a longtermist argument that SBF's (alleged and currently very likely) crimes plausibly did more harm than all the wars and pandemics in history if...
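To make the "rounding error" framing concrete, here's a toy expected-value comparison; every number in it is an illustrative assumption, not an estimate:

```python
# Toy comparison under the total-utilitarian framing above.
# All numbers are made up for illustration.
future_people = 10**100      # people in the future lightcone (from the claim above)
direct_harm = 10**6          # people directly hurt by FTX blowing up (assumed)
prob_shift = 10**-9          # tiny assumed change in the probability of a good future

indirect_effect = prob_shift * future_people
print(indirect_effect / direct_harm)  # the indirect term dominates by ~10^85

# The point: with numbers like 10^100 in play, even absurdly small indirect
# effects on the future swamp any direct harm, in either direction.
```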
The frequency with which datacenters, long-range optical networks, and power plants require human intervention to maintain their operations should serve as a proxy for the risk an AGI would face in doing anything other than sustaining the global economy as is.
Probably those things are trivially easy for the AGI to solve itself, e.g. with nanobots that can build and repair things.
I'm assuming this thing is to us what humans are to chimps, so it doesn't need our help in solving trivial 21st-century engineering and logistics problems.
The strategic c...
I was looking for exactly this recently.
I haven’t looked into his studies’ methodologies, but from my experience with them, I would put high odds on the 65% number being exaggerated.
From his sales page
"In our scientific study involving 245 people...
65% of participants who completed The 45 Days to Awakening Challenge and Experiment persistently awakened.
...
Another couple hundred people entered the program already in a place of Fundamental Wellbeing..."
Sounds like he's defining enlightenment as something that ~50% of people already experience.
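Rough arithmetic behind that ~50%, assuming "another couple hundred" means about 200 people:

```python
# Back-of-the-envelope from the sales-page figures quoted above.
# Assumes "another couple hundred" means roughly 200 people.
study_participants = 245
already_awakened_on_entry = 200

total_entrants = study_participants + already_awakened_on_entry
print(already_awakened_on_entry / total_entrants)  # ~0.45, i.e. roughly half
```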
Elsewhere he describes 'Location 1' enlightenment ...
Does anybody know if the Highlights From The Sequences are compiled in ebook format anywhere?
Something that takes 7 hours to read, I want to send to my Kindle and read in a comfy chair.
And maybe even have audio versions on a single podcast feed to listen to on my commute.
(Yes, I can print out the list of highlighted posts and skip to those chapters of the full ebook manually but I'm thinking about user experience, the impact of trivial inconveniences, what would make Lesswrong even more awesome.)
I love his books too. It's a real shame.
"...such as imagining that an intelligent tool will develop an alpha-male lust for domination."
It seems like he really hasn't understood the argument the other side is making here.
It's possible he simply hasn't read about instrumental convergence and the orthogonality thesis. What high quality widely-shared introductory resources do we have on those after all? There's Robert Miles, but you could easily miss him.
I'm imagining the CEO having a thought process more like...
- I have no idea how my team will actually react when we crack AGI
- Let's quickly Google 'what would you do if you discovered AGI tomorrow?'*
- Oh Lesswrong.com, some of my engineering team love this website
- Wait what?!
- They would seriously try to [redacted]
- I better close that loophole asap
I'm not saying it's massively likely that things play out in exactly that way but a 1% increased chance that we mess up AI Alignment is quite bad in expectation.
*This post is already the top result on Google for that particular search
I immediately found myself brainstorming creative ways to pressure the CEO into delaying the launch (seems like strategically the first thing to focus on) and then thought 'is this the kind of thing I want to be available online for said CEOs to read if any of this happens?'
I'd suggest for those reasons people avoid posting answers along those lines.
Somebody else might be able to answer better than me. I don't know exactly what each researcher is working on right now.
“AI safety are now more focused on incidental catastrophic harms caused by a superintelligence on its way to achieve goals”
Basically, yes. The fear isn’t that AI will wipe out humanity because someone gave it the goal ‘kill all humans’.
For a huge number of innocent-sounding goals, ‘incapacitate all humans and other AIs’ is a really sensible precaution to take if all you care about is getting your chances of failure down to zero. As is hidi...
I read the article and I have to be honest I struggled to follow her argument or to understand why it impacts your decision to work on AI alignment. Maybe you can explain further?
The headline "Debating Whether AI is Conscious Is A Distraction from Real Problems" is a reasonable claim but the article also makes claims like...
"So from the moment we were made to believe, through semantic choices that gave us the phrase “artificial intelligence”, that our human intelligence will eventually contend with an artificial one, the competition began... The reality is...
I suspect you should update the website with some of this? At the very least copying the above comment into a 2022 updates blog post.
The message 'CFAR did some awesome things that we're really proud of; now we're considering pivoting to something else, more details to follow' would be a lot better than the implicit message you may be sending currently: 'nobody is updating this website, the CFAR team lost interest, and it's not clear what the plan is or who's in charge anymore'.
I strongly agree
If somebody has time to pour into this, I'd suggest recording an audio version of Mad Investor Chaos.
HPMOR reached a lot more people thanks to Eneasz Brodski's podcast recordings. That effect could be much more pronounced here if the weird glowfic format is putting people off.
I'd certainly be more likely to get through it if I could play it in the background whilst doing chores, commuting or falling asleep at night.
That's how I first listened to HPMOR, and then once I'd realised how good it was I went back and reread it slowly, taking notes, making an effort to internalize the lessons.
I have a sense of niggling confusion.
This immediately came to mind...
"The only way to get a good model of the world inside your head is to bump into the world, to let the light and sound impinge upon your eyes and ears, and let the world carve the details into your world-model. Similarly, the only method I know of for finding actual good plans is to take a bad plan and slam it into the world, to let evidence and the feedback impinge upon your strategy, and let the world tell you where the better ideas are." - Nate Soares, https://mindingourway.com/dive-in-...
If you haven't already, I'd suggest you put a weekend aside and read through the guides on https://80000hours.org/
They have some really good analyses on when you should do a PhD, found a startup, etc.
what are some signs that someone isn’t doing frame control? [...]
- They give you power over them, like indications that they want your approval or unconditional support in areas you are superior to them. They signal to you that they are vulnerable to you.
There was a discussion on the Sam Harris podcast where he talks about the alarming frequency at which leaders of meditation communities end up abusing, controlling or sleeping with their students. I can't seem to find the episode name now.
But I remember being impressed with the podcast guest, a meditat...
- Erasable pens. Pens are clearly better than pencils in that you can write on more surfaces and have better colour selection. The only problem is you can’t erase them. Unless they’re erasable pens that is, then they strictly dominate. These are the best I’ve found that can erase well and write on the most surfaces.
I also loved these Frixion erasable pens when I discovered them.
But another even better step up in my writing-by-hand experience was the reMarkable tablet. Genuinely feels like writing on paper — but with infinite pages, everything syn...
Thanks Richard. Edited.
Thanks for the encouragement. Appreciate it :)
I've got this printed out on my desk at home but unfortunately I'm away on holiday for the next few weeks. I'll find it for you when I get back.
For what it's worth, most of the ideas for this chapter come from Stanislas Dehaene's book Consciousness and the Brain. Kaj Sotala has a great summary here, and I'd recommend reading the whole book too if you've got the time and interest.
Well spotted! The Psychomagic for Beginners excerpt certainly takes some inspiration from that. I read that book a few years ago and really enjoyed it too.
Thanks Ustice!
I've already written first drafts of a couple more chapters which I'll be polishing and posting over the next few months.
So I can guarantee at least a few more installments. After that it will depend on what kind of response I get and whether I'm still enjoying the writing process.
Early in HPMOR there's a bit where Harry mentions the idea of using magic to improve his mind but it's never really taken much further.
I wanted to write about that: if you lived in a universe with magic how could you use it to improve your intelligence and rationali...
I'm afraid so. Sorry. We hope to run more in the future!