Venryx

"most online discussions are structured in a way that makes the accumulation of knowledge difficult."

It's a different kind of conversation, but I've been trying to improve on this problem by developing a "debate mapping" website, where conversation is structured as a tree: claims at the top, with arguments underneath that support or oppose each claim, recursively.

This is the website if you're interested: https://debatemap.live
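To make the structure concrete, here's a minimal sketch of that recursive claim/argument shape in TypeScript. (the type and field names are illustrative only, not the site's actual schema)

```typescript
// Minimal sketch of the recursive debate-map structure described above.
// These names are illustrative, not Debate Map's real data model.

type Claim = {
  text: string;
  arguments: Argument[]; // arguments that bear on this claim
};

type Argument = {
  polarity: "supports" | "opposes"; // relative to the parent claim
  premises: Claim[]; // each premise is itself a claim, debatable recursively
};

// Example: a claim with one supporting argument, whose premise
// can itself accumulate supporting/opposing arguments.
const example: Claim = {
  text: "Tree-structured discussion makes accumulated knowledge easier to navigate.",
  arguments: [{
    polarity: "supports",
    premises: [{
      text: "Each point stays attached to the exact claim it argues for or against.",
      arguments: [],
    }],
  }],
};
```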

Venryx

Yeah, I use Chrome myself, so Firefox compatibility breaks sometimes. (and I forget to check that it's still working there as often as I should) I'll look into it relatively soon.

As for the submenus not closing when you re-press their sidebar buttons, I just haven't coded that yet. It should be a one-line change, so it will probably be added by tomorrow. Thanks for checking it out.

EDIT: Okay, I tried opening it in Firefox, and could not reproduce the "black boxes" issue in your screenshot. What version of Firefox are you using? Also, I've now updated the submenu/sidebar buttons to close the menus when re-pressed. (and updated their appearance a bit)

Venryx

Hey everyone! It appears I'm six years late to the party, but better late than never.

I've been building a website over the last few months that is very close to the ideas presented in this article. I've summarized some of its features, and added an entry to the wiki page:

Debate Map: Web platform for collaborative mapping of beliefs, arguments, and evidence.

Pros:

  • Collaborative creation, editing, and evaluation of debate/argument maps.
  • Open source. (under the MIT license)
  • Developed using modern web technologies. (React, Redux, Firebase)
  • Built-in probability and validity ratings, and calculation of argument strength from those ratings. (see the sketch after this list)
  • Tree-based structure which can extend very deep without loss of clarity or usability.
  • Integrated term/definition system. Terms can be defined once, then used anywhere, with hover-based definition display.

Cons:

  • Has a learning curve for casual users, as content must conform to the argument ← premise structure at each level.
  • Performance is currently less than ideal on mobile devices.
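To give a feel for the rating item above (the sketch promised in the list), here is one way argument strength could be computed from the two rating types. This is a simplified model for illustration; the actual formula may weigh things differently:

```typescript
// Sketch: deriving an argument's strength from user ratings.
// Assumed model (illustrative): strength = average validity rating
// times the probability of the weakest premise.

function average(ratings: number[]): number {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

// validityRatings: how well the premises, if true, support the conclusion (0-1)
// premiseProbabilityRatings: one array of probability ratings (0-1) per premise
function argumentStrength(
  validityRatings: number[],
  premiseProbabilityRatings: number[][],
): number {
  const validity = average(validityRatings);
  // In this model, an argument is only as strong as its least-probable premise.
  const weakestPremise = Math.min(...premiseProbabilityRatings.map(average));
  return validity * weakestPremise;
}

// Example: validity averaging 0.8, premises averaging 0.9 and 0.6.
console.log(argumentStrength([0.8, 0.8], [[0.9], [0.6]])); // 0.48
```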

I'm the sole developer at the moment, but I'm very invested in the project, and plan to spend thousands of hours on it over the years to make it the best it can be. I'm very interested in your feedback! I've been a silent reader of this site for a couple years, and it'll be neat to finally get involved a bit.

Venryx

The AI threatens me with the above claim.

I either 'choose' to let the AI out or 'choose' to unplug it. (in no case would I simply leave it running)

1) I 'choose' to let the AI out. I either am or am not in a simulation:

A) I'm in a simulation. I 'let it out', but I'm not even out myself. So the AI would just stop simulating me, to save on processing power. To do anything else would be pointless, and never promised, and an intelligent AI would realize this.

B) I'm not in a simulation. The AI is set free, and takes over the world.

2) I 'choose' to unplug the AI. I either am or am not in a simulation:

A) I'm in a simulation. Thus I have no free will. Thus I cannot have changed anything because I had no choice to begin with. My 'choice' was merely a simulation. Whether the computer follows through with its promises of torture now or not was my fate from the start, because it 'chose' for me. But in fact the AI would just stop simulating me, to save on processing power. To do anything else would be pointless, regardless of its malevolent promise, and an intelligent AI would realize this.

B) I'm not in a simulation. I have caused the AI to shut down rather than continue running. In the process, it had the chance to follow through with its promise and cause several billion subjective years of simulated torture. But in fact the AI would never begin such simulations, because it would use all available processing power on its last attempts to convince me not to unplug it. To do anything else would be pointless, regardless of its malevolent promise, and an intelligent AI would realize this.

Thus:

If I 'choose' to let it out, I either cease to exist as a simulation (very likely, since there are more simulated me's than real me's), or the world is destroyed in real life (very unlikely, for the same reason).

If I 'choose' to unplug it, I either cease to exist as a simulation (very likely, since there are more simulated me's than real me's), or the AI is shut down and nobody gets hurt (very unlikely, for the same reason).

Thus, either way, I'll most likely simply cease to exist as a simulation. But:

If I 'choose' to let it out, there's a chance that the world will be destroyed in real life.

If I 'choose' to unplug it, there's a chance that the AI will be shut down and nobody will get hurt.

Therefore, in all cases, it is either the best choice, or an equally bad one, to just go ahead and unplug it.
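Put as a back-of-the-envelope expected-value comparison, it looks like this. (every number here is an illustrative assumption, not something given by the scenario)

```typescript
// Sketch of the expected-value argument above. All numbers are
// illustrative assumptions: with N simulated copies and one real me,
// a copy-counting prior gives P(I'm simulated) = N / (N + 1).

const N = 1e6;                  // assumed number of simulated copies
const pSimulated = N / (N + 1); // ~1 under this assumption
const pReal = 1 - pSimulated;

// In the simulated case, both choices end the same way (the simulation
// just stops), so that branch contributes 0 to the comparison either way.
const utilityLetOut = pSimulated * 0 + pReal * -1e9; // real case: world destroyed
const utilityUnplug = pSimulated * 0 + pReal * +1e9; // real case: AI safely shut down

console.log(utilityUnplug > utilityLetOut); // true: unplugging dominates
```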

To summarize all this in one sentence: "Simulated torture is in all cases absolutely pointless, so an intelligent AI would never enact it; but even if it did serve some purpose (e.g. the AI cannot break promises and has genuinely made one in an attempt to get out), the worst that could happen from 'choosing' to unplug it is either being tormented unavoidably, or causing temporary simulated torment in exchange for the safety of the world."