Comment author: ZeitPolizei 14 August 2016 04:41:09PM *  0 points [-]

Yeah, the estimates will always be subjective to an extent, but whether you choose historic figures, all humans and fictional characters that ever existed, or whatever, shouldn't make a huge difference to your results, because in Bayes' formula the ratio P(C|E)/P(C) ¹ should always be roughly the same, regardless of filter.

¹ C: coin exists
E: person existed
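
A minimal sketch of why the filter washes out, in the footnote's notation; this is just Bayes' theorem rearranged, nothing beyond what the comment already assumes:

```latex
% Bayes' theorem:
%   P(C|E) = P(E|C) P(C) / P(E)
% Dividing both sides by the prior P(C) gives the update factor:
\[
\frac{P(C \mid E)}{P(C)} \;=\; \frac{P(E \mid C)}{P(E)}
\]
% The filter (reference class) mainly rescales the prior P(C); what the
% evidence contributes is the likelihood ratio P(E|C)/P(E), and that is
% what should be roughly filter-independent.
```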

Comment author: TheAncientGeek 27 May 2016 07:22:41PM *  0 points [-]

The stronger someone's imaginative ability is, the more their imagining an experience is actually having it, in terms of brain states... and the less it is a counterexample to anything relevant.

If the knowledge the AI gets from the colour routine is unproblematically encoded in a string of bits, why can't it just look at the string of bits? For that matter, why can't Mary just look at the neural spike trains of someone seeing red?

Comment author: ZeitPolizei 27 May 2016 11:57:02PM 1 point [-]

why can't Mary just look at the neural spike trains of someone seeing red?

Why can't we just eat a picture of a plate of spaghetti instead of actual spaghetti? Because a representation of some thing is not the thing itself. Am I missing something?

Comment author: ShardPhoenix 26 May 2016 07:06:35AM *  6 points [-]

Consider a situation where Mary is so dexterous that she is able to perform fine-grained brain surgery on herself. She could then look at what an example of a brain that has seen red looks like, and manually copy any relevant differences into her own brain. While she still would never have actually seen red through her eyes, it seems like she would know what it is like to see red as well as anyone else.

I think this demonstrates that the Mary's room thought experiment is about the limitations of human senses and means of learning, and that its apparent sense of mystery comes mainly from the vagueness of what it means to "know all about" something. (Not saying it was a useless idea - it can be quite valuable to be forced to break down some vague or ambiguous idea that we usually take for granted.)

Comment author: ZeitPolizei 26 May 2016 05:28:15PM 0 points [-]

The AI analogue would be: if the AI has the capacity to wirehead itself, it can make itself enter the color perception subroutines. Whether something new is learned depends on the rest of the architecture. I would say that in the case of humans, it is clear that whenever something new is experienced, the human learns what that experience feels like. I reckon that some people with strong visualization abilities (in a broad sense) can know what an experience feels like without having it first-hand, by synthesizing a new experience from previously known ones. But in most cases there is a difference between imagining a sensation and experiencing it.

In the case of the AI, either no information is passed between the color perception subroutine and the main processing unit, in which case the AI may have a new experience but not learn anything new; or some representation of the experience of being in the subroutine is saved to memory, in which case something new is learned.
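
A toy sketch of that distinction (all names here are hypothetical; this is only meant to make the two cases concrete, not to claim anything about what "having an experience" amounts to):

```python
# Toy model: does the main process retain a record of running the
# color-perception subroutine? All names are hypothetical.

class ToyAI:
    def __init__(self):
        self.memory = []  # the "main processing unit's" record

    def _color_subroutine(self, color):
        # Stand-in for whatever internal state "perceiving red" would be.
        return {"experienced_color": color}

    def enter_subroutine_isolated(self, color):
        # Case 1: the subroutine runs, but nothing is passed back.
        # The system has the "experience" but records nothing, so
        # afterwards it knows no more than before.
        self._color_subroutine(color)

    def enter_subroutine_recorded(self, color):
        # Case 2: a representation of the episode is saved to memory,
        # so something new is learned.
        self.memory.append(self._color_subroutine(color))

ai = ToyAI()
ai.enter_subroutine_isolated("red")
print(ai.memory)  # [] -- nothing retained
ai.enter_subroutine_recorded("red")
print(ai.memory)  # [{'experienced_color': 'red'}] -- something retained
```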

Comment author: ZeitPolizei 30 April 2016 04:02:52AM *  1 point [-]

From the cover text of How to Build a Brain it seems the main focus is on the architecture of Spaun, and I suspect it does not actually give a proper introduction to other areas of computational neuroscience. That said, I wouldn't be surprised if it is the most enjoyable book on the topic that you can find. I have read Computational Neuroscience by Hanspeter Mallot, which is very short, weird and not very good. I'm currently about halfway through Theoretical Neuroscience by Dayan and Abbott. My impression is that it might be decent for people with a strong physics/math background, it's OK if you have some prior knowledge of the topics (e.g. from having attended a lecture), and rather bad otherwise.

Edit: My prof told me about Information Theory, Inference and Learning Algorithms (legal free online version), which is, as the title implies, more about information theory and learning algorithms (so more mathy), but from the perspective of neuroscience, so it's missing a lot of the typical topics of computational neuroscience. I have just started reading it, but so far it seems really well written (4.35 rating on Goodreads), and it also contains exercises and reflection questions.

Comment author: ZeitPolizei 12 September 2015 05:00:29PM 3 points [-]

All the links direct me to Ohio State University email login.

Comment author: ArisKatsaris 01 September 2015 10:43:51PM 3 points [-]

Nonfiction Books Thread

Comment author: ZeitPolizei 04 September 2015 09:46:20PM 1 point [-]

Human Learning and Memory, by David A. Lieberman (2012)

A well-written overview of current knowledge about human learning and memory. Of special interest:

* the use of reinforcement as a teacher, parent, pet owner, or for self-improvement
* for me personally: a strategy to combat insomnia (results pending)
* implications of memory research for study strategies

Comment author: philh 06 August 2015 10:07:00PM 0 points [-]

Personally I'm more interested in knowing how much powder I have than how much water. (Though I'm not sure how accurate I can really be based on the volume markings - weight would be more accurate, but also a hassle.)

A small amount of water seems to work, though it does increase uncertainty in powder volume a bit.

Comment author: ZeitPolizei 07 August 2015 06:21:42AM 0 points [-]

I used a measuring cup (iirc 75ml) for the powder. My typical meal would be three cups of powder and 300ml of water. It's quite thick that way; my friend used more water.

Comment author: philh 05 August 2015 07:25:12PM *  1 point [-]

When I shake joylent, there always seems to be a small amount of dry powder remaining in the bottom corners of the shaker, no matter how much I shake. I need to poke it with a fork to get it to mix in. Anything else that would work? (The joylent shaker comes with a small ball whisk to keep inside, which might help a bit but doesn't solve the problem.)

Immediate edit: oh! I should try putting in a small amount of water before the powder. Other suggestions also welcome.

Comment author: ZeitPolizei 06 August 2015 08:51:51PM 1 point [-]

When I used home-made soylent, I first put in (all) the water, then the powder. My shaker also has a plastic grid insert in the lid. Putting in the water first also lets you see exactly how much water you have (my shaker is a transparent measuring cup). I don't remember ever having any problems.

Comment author: [deleted] 05 August 2015 02:09:50PM 1 point [-]

When you write the predictions, do you simply add optimism without changing the processes to reach a conclusion, or do you try to map out the "how" of making an outcome match more optimistic outcomes?

In response to comment by [deleted] on Open thread, Aug. 03 - Aug. 09, 2015
Comment author: ZeitPolizei 05 August 2015 04:08:36PM 0 points [-]

Good point. When I wrote down the predictions, I just used my usual unrealistically optimistic estimate of "this is in principle doable in this time and I want to do it", i.e. my usual "planning" mode, without considering how often I usually fail to execute my "plans". So in this case I adjusted neither my optimism nor my plans; I only put my estimate of success into actual numbers for the first time (and hoped that would do the trick).

Comment author: ChristianKl 04 August 2015 08:02:55AM 1 point [-]

I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties.

What does "if used ethically" mean?

This is a bit like the debate around tasers. Tasers seem like a good idea because they allow police officers to use less force. In reality, in nearly every case where an officer would have used a real gun in the past, they still use a real gun; the taser shots come in addition.

The actual question is how much (more) damage someone without qualms about ethics can do with autonomous weapons, and whether we can implement policies to minimize the availability of autonomous weapons to people we don't want to have them.

The US is already using its drones in Pakistan in a way that violates many passages of international law, such as shooting at people who rescue the wounded. That's not in line with ethical use. They use the weapons whenever they expect that to produce a military advantage.

Robotics and AI experts aren't experts on politics, and don't know what the actual effects of an autonomous weapon ban would be.

Elon Musk does politics in the sense that he has experience lobbying for laws to get passed. He likely has people with deeper knowledge on staff.

On the other hand, I don't see that the author of the article has political experience.

Comment author: ZeitPolizei 04 August 2015 11:03:03AM 1 point [-]

What does "if used ethically" mean?

I was thinking mainly along the lines of using them in regular combat vs. indiscriminately killing protesters.
Autonomous weapons should eventually be better than humans at (a) hitting targets, thus reducing combatant casualties on the side that uses them, and (b) differentiating between combatants and non-combatants, thus reducing civilian casualties. This is working under the assumption that something like a guard robot would accompany a patrolling squad. Something like a swarm of small drones that sweeps a city to find and subdue all combatants is of course a different matter.

The US is already using its drones in Pakistan in a way that violates many passages of international law, such as shooting at people who rescue the wounded.

I wasn't aware of this; do you have a source on that? Regardless, from what I know the number of civilian casualties from drone strikes is definitely too high.
