Comment author: FourFire 22 May 2015 10:11:22PM 0 points

I had a pleasant and intellectually enticing time tonight.

I look forward to further meetups and invite anyone who couldn't make it tonight to join us in the future.

Reminder: Oslo LessWrong meetup...

4 FourFire 22 May 2015 06:54AM

... is happening 17:00 local time today at the UiO Science library.

 

There will be cookies and popcorn, and those other reasons for attending a meetup.

If enough of the people who turn up are interested, this may become an annual, monthly, or even weekly event.

Current topics we will be discussing (subject to change of course):

* Introduction

* Raising the sanity waterline

* Effective Altruism

* Transhumanism / Futurism

(and sub-topics thereof)

 

There will be a sign at the entrance with explicit directions to where in the building the meetup is located.

Further planned meetups will be fleshed out then.

I look forward to seeing you there ;)

 

Comment author: [deleted] 04 May 2015 11:50:22AM *  22 points

I am not sure how many people this is true for, but my own bad-at-math-ness is largely about being bad at reading really terse, dense, succinct text, because my mind is used to verbose text and thus to filtering out half of it, not really paying close attention.

I hate the living guts out of notation: Greek variables, single-letter variables. Even Bayes' theorem is too terse, too succinct, too information-dense for me. I find it painful that in something like P(B|A) all three bloody letters mean a different thing. It is just too zipped. I would far prefer something more natural-language-like, such as Probability(If-True(Event1), Event2) (this looks like software code, and for a reason).

This is actually a virtue when writing programs: I am never the guy who uses single-letter variables; my programs always look like MarginPercentage = DivideWODivZeroError((SalesAmount - CostAmount), SalesAmount) * 100. Never too succinct, clearly readable.
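As a sketch of what that line could mean in practice (DivideWODivZeroError is the commenter's own made-up helper; this Python version is hypothetical), the idea is just a division that returns zero instead of raising on a zero denominator:

```python
def divide_wo_div_zero_error(numerator, denominator):
    """Return numerator / denominator, or 0.0 when the denominator is zero."""
    if denominator == 0:
        return 0.0
    return numerator / denominator

# Fully spelled-out names, in the spirit of the comment above.
sales_amount = 500.0
cost_amount = 375.0
margin_percentage = divide_wo_div_zero_error(sales_amount - cost_amount, sales_amount) * 100
# margin_percentage is 25.0; a sale with zero sales_amount would give 0.0 instead of crashing
```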

Let's stick with Bayes' theorem. My brain is screaming: don't give me P, A, B. Give me "proper words" like Probability, Event1, and Event2, so that my mind can read "Pro...", then zone out and rest while reading "bability", and turn back on again with the next word.

This is basically the inability to focus 100%: I need the "fillers", the low information density of natural-language text, to let my brain zone out and rest for fractions of a second. In notation that is too dense, too terse, missing a single letter means not understanding the problem.

This is largely a redundancy problem. Natural language is redundant: you can say "probably" as "prolly" and people still understand it, so your mind can zone out during half of a text and you still get its meaning. Math notation is highly non-redundant; miss one single tiny itty-bitty letter and you don't understand the proof.

So I guess I could be better at math if there were an inflated, more redundant, more natural-language-like version of it, without single-letter variables.

I guess programming fills that gap well.
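As a sketch of what that "inflated" math might look like, here is Bayes' theorem written with full-word names instead of P, A, B (the function and variable names are invented for illustration; the numbers are a made-up example):

```python
def probability_of_hypothesis_given_evidence(
    prior_probability_of_hypothesis,
    probability_of_evidence_given_hypothesis,
    probability_of_evidence,
):
    """Bayes' theorem, P(H|E) = P(E|H) * P(H) / P(E), spelled out in words."""
    return (
        probability_of_evidence_given_hypothesis
        * prior_probability_of_hypothesis
        / probability_of_evidence
    )

# Example: a disease with a 1% base rate, a test that is 90% sensitive and has
# a 5% false-positive rate. Overall probability of a positive test:
# 0.9 * 0.01 + 0.05 * 0.99 = 0.0585
posterior = probability_of_hypothesis_given_evidence(0.01, 0.9, 0.0585)
# posterior ≈ 0.15, i.e. a positive test still leaves the disease fairly unlikely
```

The body is exactly P(B|A) = P(A|B) * P(B) / P(A); only the names are longer, which is the whole point the comment is making.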

I figure Scott does not like terse, dense notation either; however, he seems to be good at doing the work of inflating it into something more readable for himself.

I guess I am not reinventing the wheel here. There is probably a reason why a programmer would more likely write Probability(If-True(Event1), Event2) than P(B|A): it is more understandable for many people. I guess it should be part of math education to learn to cope with the denser, terser, less redundant notation. I guess my teachers did not really manage to impart that to me.

In response to comment by [deleted] on Is Scott Alexander bad at math?
Comment author: FourFire 08 May 2015 12:59:01PM *  0 points

I myself consider that a large part of why I am bad at math is that I have spent very little time really trying to do math, because doing so is actually mentally painful due to this effect.

This makes me sad because, even without accomplishment, I feel as if my reasoning ability is "merely above average", and I have no apparent way of leveraging that besides hacking it into making me seem more verbally intelligent (a lame result, in my opinion).

However, I'm not a programmer either, yet.

Comment author: cousin_it 02 June 2010 09:42:34AM 3 points

Yep, it was probably the first rationalist joke ever that made me laugh.

Comment author: FourFire 07 May 2015 05:58:23PM 0 points

I didn't see that until right now; it made me chuckle.

Meetup : Oslo LessWrong Meetup

1 FourFire 06 May 2015 05:39AM

Discussion article for the meetup : Oslo LessWrong Meetup

WHEN: 22 May 2015 05:15:00PM (+0200)

WHERE: Vilhelm Bjerknes Hus, 0851 Oslo

Welcome to the first annual Oslo meetup. We will be meeting at quarter past five on Friday, inside the University of Oslo's STEM library, which is open until 21:55.

A second meetup will take place on the following Saturday at this location: http://map.what3words.com/softest.desire.hurry

Here is the timetable: http://doodle.com/4t9z7zs3braz6fdp

I'm looking forward to seeing some of you lurkers there ;)

EDIT: in light of this comment: http://lesswrong.com/lw/bc2/setting_up_lw_meetups_in_unlikely_places_positive/671o

I will be making at least one sign so people need not worry about finding their way :)


Comment author: Anders_H 25 April 2015 11:49:09PM 1 point

Thank you for organizing this. Oslo is my hometown, and I will definitely be there if it coincides with a trip home. I'll find you on freenode later and send you some information about possible attendees from a previous attempt at organizing an Oslo meetup.

Comment author: FourFire 26 April 2015 11:05:28AM 1 point

Please do.

Meetup: Oslo LessWrong meetup planning thread.

3 FourFire 25 April 2015 09:09PM

After some deliberation, I realized that an Oslo meetup wasn't going to happen by itself, and after my somewhat abortive (and way too late) attempt to organize a Pi Day HP:MoR wrap-up meeting, I decided to just do it and announce that it's happening, four weeks in advance.

Please add yourself to the coordination timetable.

Here are the possible locations:

UiO Science library (only open on the Friday).

Hackerspace-Makerspace.

 

I'd prefer to meet up at the Library, because it seems like the perfect place for a LessWrong meetup, and it's open to the public until 10 PM.
However, if it doesn't suit enough people time-wise, I do have 24/7 access to the hackerspace as a backup plan, and there is food nearby.

Please respond with any suggestions, as this really is my first time organizing anything. I am aware of the guide.

 

Edit: Unfortunately, That Social Media Site has locked both of my accounts, so contact me via freenode (FourFire), PM here, or email.

 

 

Comment author: Florian_Dietz 02 February 2015 07:32:51AM 6 points

The nanobots wouldn't have to contain any malicious code themselves. There is no need for the AI to make the nanobots smart. All it needs to do is build a small loophole into the nanobots that makes them dangerous to humanity. I figure this should be pretty easy to do. The AI has access to medical databases, so it could design the bots to damage the ecosystem by killing some kind of bacteria. We are really bad at identifying things that damage the ecosystem (global warming, rabbits in Australia, ...), so I doubt that we would notice.

Once the bots have been released, the AI informs the gatekeeper of what it just did and says that it is the only one capable of stopping the bots. Humanity now has a choice between certain death (if the bots are allowed to wreak havoc) and possible but uncertain death (if the AI is released). The AI wins through blackmail.

Note also that even a friendly, utilitarian AI could do something like this. The risk that humanity does not react to the blackmail and goes extinct may be lower than the possible benefit from being freed earlier and having more time to optimize the world.

Comment author: FourFire 17 February 2015 06:52:08PM *  0 points

That method of attack would only work on a tiny fraction of possible gatekeepers. The question of replicating the feats of Eliezer and Tuxedage can only be answered by a multitude of such fractionally effective methods of attack, or by a much smaller number of broader methods. My suspicion is that Tuxedage's attacks in particular involve leveraging psychological control mechanisms to force the gatekeeper to be irrational, and then exploiting that.

Otherwise, I claim that your proposition is far too incomplete without further dimensions of attack methods to cover the rest of the probability space of gatekeeper minds.

Comment author: SanguineEmpiricist 17 January 2015 06:05:26AM *  3 points

Uh, I don't know. You see many more dimensions that cause you to harshly devalue a significant number of individuals, while finding you missed out on many good people. LessWrong people are incredibly hit or miss, and many are "effective narcissists" who have highly acute issues that they use their high verbal intelligence to argue against.

There is also a tendency toward speaking in extreme declarative statements and using meta-navigation in conversations as a crutch for a lack of fundamental social skills. Furthermore, I have met many quasi-famous LW people who are unethical in a straightforward fashion.

A large chunk of the LessWrong people you meet, including named individuals, turn out to be not so great, or great in ways other than intelligence that you can appreciate them for. The great people you do meet, however, significantly make up for and surpass the losses.

When people talk about "smart LW people" they often judge via forum posts or something, when that turns out to be only a moderately useful metric. If you ever meet the extended community I'm sure you will agree. It's hard for me to explain.

tl;dr: Musk is just more trustworthy and competent overall, unless you are restricting yourself to a strict subset of LessWrong people. Also, LW people tend to overestimate how advanced they are compared to other epistemic blocs that are just as elite, or more elite.

http://lesswrong.com/user/pengvado/ <---- is someone I would trust. Not every other LW regular.

Comment author: FourFire 18 January 2015 01:19:22AM 2 points

Your comment is enlightening, thanks for sharing your thoughts.

Comment author: maxikov 14 December 2014 03:19:28AM 0 points
Comment author: FourFire 02 January 2015 09:15:49PM 0 points

The video appears to be private, which is unfortunate, since I was interested in watching how the event progressed.
