vi21maobk9vp

Comments

It looks like we (the three people with «yes» responses) will in fact be gathering around 13:00–13:10 at the same place (but we plan to also be there at 14:00 as promised).

(I am the organiser and have posted some of the previous iterations myself; this time the event was created from the Meetups Everywhere list.)

Based on private messages and past experience, I expect the meetup to have at least two and no more than five participants. Based on last time, I expect that we will meet around 14:00 (the posted time) and stay for around 15 minutes within the line of sight of the specified meeting place, with the sign visible. After that, no promises about keeping the sign up, and no idea whether we will move to a different part of the park or to a nearby place (depends on weather and mood). I won't have access to email there (so, as posted, PM me a number if you want an SMS when we decide on further movements).

Great!

I wouldn't be sure I count as a rationalist; I read everything on ACX (and have read SSC, including the backlog), but I stopped reading LW after not too long.

I moved to Bordeaux a year ago (right after the move I couldn't commit in advance to being available on any specific day, so I didn't run a meetup last September), but at the spring Meetups Everywhere a person showed up who has lived in Bordeaux longer than me, so I guess you two would have had a chance to run your small ACX meetup a year or two ago…

I do indeed work on Peixotto campus (like the other person who was at the spring meetup — not sure if they will come this time).

I am not sure I count as a rationalist… (For example, I think Scott Aaronson's position on AI is much more reasonable than what one finds on LW.)

But for an «ACX in general» meetup I think we will find a topic interesting to everyone to talk about… if there is an «everyone», of which I am not 100% sure.

And you are completely right.

I meant that designing a working FOOM-able AI (or a non-FOOMable AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.

I.e. walking the walk is harder than talking the talk.

If we are not inventive enough to find a menace not obviously shielded by lead+ocean, more complex tasks, like actually designing a FOOM-able AI, are beyond us anyway…

You say "presumably yes". The whole point of this discussion is to listen to everyone who will say "obviously no"; their arguments would automatically apply to all weaker boxing techniques.

How much evidence do you have that you can count accurately (or make a correct request to a computer and interpret the results correctly)? How much evidence that probability theory is a good description of events that seem random?

Once you have as much evidence for atomic theory as you have for the weaker of the two claims above, describing your degree of confidence requires more effort than just naming a number.

I guess that understanding the univalence axiom would be helped by understanding the implicit equality axioms.

The univalence axiom states that two isomorphic types are equal; this means that if a type A is isomorphic to a type B, and a type C contains A, then C has to contain B (and a few similar requirements).

Requiring that two types be equal whenever they are isomorphic means prohibiting anything we could write that distinguishes them (i.e. that fails to handle them equivalently).
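
A minimal Lean 4 sketch of the statement (assuming Mathlib's `Equiv` for isomorphisms; the names `idToEquiv` and `ua` are mine, and since Lean's proof-irrelevant equality actually refutes univalence, this only illustrates the statement, not a consistent axiomatisation):

```lean
import Mathlib.Logic.Equiv.Basic

universe u

-- The canonical map: an equality between types induces an equivalence
-- (this direction needs no extra axioms).
def idToEquiv {A B : Type u} (h : A = B) : A ≃ B :=
  Equiv.cast h

-- Univalence, in the simplified form used above: every equivalence
-- (isomorphism) between types yields an equality between them.
-- (The full axiom says `idToEquiv` itself is an equivalence.)
axiom ua {A B : Type u} : A ≃ B → A = B

-- The consequence from the comment: any property `C` of types that
-- holds of `A` transports to any `B` isomorphic to `A`, so nothing
-- expressible as such a `C` can distinguish isomorphic types.
example {A B : Type u} (C : Type u → Prop) (e : A ≃ B) (hA : C A) : C B :=
  ua e ▸ hA
```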
