christianvoigt/argdown

Probabilistic argument mapping and undercut relations

6-AND-9 opened this issue · 3 comments

Hi there Christian,

I really like Argdown and I plan on using it a lot in the future. In general, it works great for argument mapping. I would like it to explicitly show the undercut/rebut/undermine relations, but these can be implicitly derived from the structure anyway.

What I would really look forward to is the implementation of probabilistic argument mapping. Currently the relations are threefold: positive support (+), negative support (-), and "is strictly contrary to" (><) (if I am missing something, please let me know; I am just starting with Argdown).

However, real argument building in our minds happens in a more probabilistic manner. An example would be a conclusion (inference) supported by three 100%-strong statements/nodes and negatively supported by one 100%-strong statement, which would render the support for the conclusion at 75%.

Can this be implemented, and do you have any plans of doing so?

I am a philosopher, so I can't help with the technical stuff. However, if I were to go about this, it would be:

  • implement the ability to pre-define nodes as either of two types: fuzzy (a rational number between 0 and 1, where 0.5 is the midpoint) and strict (extended boolean: either 0, 0.5 or 1);
  • fuzzy and strict nodes would have a strength that reflects the support they get from other statements that somehow (positively or negatively) relate to them;
  • if there are no such other statements leading to them, nodes will have a default strength value of 1 (since if one includes a node in one's argument map, it is a full-strength one anyway), unless one explicitly redefines the strength value;
  • strict nodes are 1 if their support exceeds 0.5, 0 if it is below 0.5, and 0.5 if it is exactly 0.5; a tolerance band around 0.5 (e.g. ±1%) might be acceptable here as well, as a property that can be enabled;
  • the strength of a supporting or attacking relation would reflect the strength of its source node: positive support carries the node's strength directly, negative support carries the node's strength multiplied by -1;
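A minimal TypeScript sketch of the two proposed node types (all names are illustrative; this is not Argdown code, just the proposal made concrete):

```typescript
// Sketch of the proposed node types; every name here is illustrative.
type NodeKind = "fuzzy" | "strict";

interface ProposedNode {
  kind: NodeKind;
  // Strength in [0, 1]; defaults to 1 when no other statements lead to the node.
  strength: number;
}

// Snap a strict node's incoming support to 0, 0.5 or 1.
// `tolerance` is the optional band around 0.5 (e.g. 0.01 for ±1%).
function strictValue(support: number, tolerance = 0): 0 | 0.5 | 1 {
  if (Math.abs(support - 0.5) <= tolerance) return 0.5;
  return support > 0.5 ? 1 : 0;
}
```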

Example:

Statement 1a: strength = 0.75
Statement 1b: strength = 0.8
Statement 1c: strength = 0.4

Conclusion 1, positively supported by 1a and 1b, but negatively supported by 1c: strength = 0.6917 (rounded)

Why?

Because 0.75 + 0.8 + (0.4 * -1) = 1.15
Then we derive the average: 1.15 / 3 = 0.3833
Finally we add 1 (to shift the value out of the negative range) and divide by 2 (to rescale it into the range between 0 and 1): (0.3833 + 1) / 2 = 1.3833 / 2 = 0.691666667
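The calculation above can be condensed into one small function (a sketch of the proposed averaging rule only; the function name is made up):

```typescript
// Signed average of supporter strengths, rescaled from [-1, 1] to [0, 1].
// Positive supporters enter with their strength, negative ones with strength * -1.
function nodeStrength(pro: number[], con: number[]): number {
  const signedSum =
    pro.reduce((s, x) => s + x, 0) - con.reduce((s, x) => s + x, 0);
  const avg = signedSum / (pro.length + con.length);
  return (avg + 1) / 2;
}

// Conclusion 1 from the example: supported by 1a (0.75) and 1b (0.8),
// attacked by 1c (0.4).
nodeStrength([0.75, 0.8], [0.4]); // ≈ 0.6917
```

Note that the 75% figure from the earlier example also falls out of this rule: three full-strength supporters and one full-strength attacker give ((3 - 1) / 4 + 1) / 2 = 0.75.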

Let me know what you think. This was off the top of my head, so the formulae might be wrong. And let me know if I can help in any way, e.g. by donating, conceptualising, etc.

Hi, thanks for your interest in Argdown. Before I get to your main topic, just a quick tip: Argdown already has an undercut relation (<_). Here is how you use it:

<a>
    _> <b>

<a>

(1) s1
(2) s2
-----
    <_ [t1]
(3) s3

Now, regarding "probabilistic" inferences. In general, Argdown is not restricted to deductive inferences. You can either use non-deductive inference patterns or make the uncertainty explicit in your statements ("It is probable that...", "It is possible that...", or "It is scientific consensus that..." and so on). I think the latter method has many advantages, as you can still use deductive logic and the statements express much more clearly what is actually under discussion.

However, what you are interested in is the quantification of "how well a statement is justified" in a debate. So I think what you are talking about are "degrees of justification" which is a topic Gregor Betz (whose Debatelab is funding Argdown development) has written a lot about. Check out this article for example.

Adding something like this to Argdown is possible without changing the syntax. It is one of the many purposes of statement/argument metadata. You can add any data you want to statements (another typical use case would be formalizations):

[Man-made]: Climate change is man-made { doj: 0.9 }

To use this kind of data for automatic calculations, Argdown users can write custom plugins that read and transform data, including metadata (in the case of formalizations, one could use an automatic proof checker). I will gladly answer any questions by plugin developers.
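To illustrate what such a plugin might do with the metadata (the data shapes below are invented for illustration and do not reflect the actual @argdown/core plugin API):

```typescript
// Invented data shape for illustration; NOT the real Argdown plugin API.
interface StatementData {
  title: string;
  data?: { doj?: number }; // metadata as written in { doj: 0.9 }
}

// Collect degrees of justification, falling back to 1 for statements
// without explicit metadata.
function collectDojs(statements: StatementData[]): Map<string, number> {
  const dojs = new Map<string, number>();
  for (const s of statements) {
    dojs.set(s.title, s.data?.doj ?? 1);
  }
  return dojs;
}
```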

One thing that is still missing though, is the ability of plugins to enhance or format the original Argdown document (for example adding calculation results back to the metadata). At the moment the output of an Argdown data transformation is always in another format (an "export") and not reintegrated into your code. That is something that is on my long-term todo list.

Such a "degree of justification" plugin should not be part of the core of Argdown, as this is a controversial area of research and Argdown tries hard not to be theoretically opinionated. But it would definitely be a fascinating experimental feature!

Hey Christian,

Thanks for your reply.

First, let me ask you about the undercut you mentioned. As far as I can see, the undercut defeats a node, whereas it should defeat the relationship between two nodes (on the visual map), i.e. the undercut should attack the inference. Yes, it is marked with purple, but it still points to a node instead of the relationship arrow. That means that it defeats the node, which might be pointing to additional nodes through inferences that are valid. Do you have any plans of changing/amending this?

Now, on probabilistic inferences. The ability to add any metadata is great, and I believe this can be used to define the strengths. However, there should be a reasoning engine (a proof checker, as you say) that performs calculations that propagate through the chain and updates the graph whenever values change. I showed above that calculations of this kind are not very complex. They need to start from the bottom of the tree and propagate to the top, though, and if the tree is complex this can have computational implications (but nothing undoable by today's PCs, for sure).
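Such bottom-up propagation could look roughly like this (a sketch under the averaging rule proposed earlier; the graph shapes and names are invented, and cycles are assumed away):

```typescript
// Invented graph shape for illustration; assumes an acyclic support graph.
interface Edge { from: string; to: string; positive: boolean }

function propagate(
  base: Map<string, number>, // leaf strengths (per the proposal, default 1)
  edges: Edge[]
): Map<string, number> {
  const strength = new Map(base);
  const incoming = (id: string) => edges.filter(e => e.to === id);
  // Memoized recursion: a node's strength is the rescaled signed average
  // of its supporters' and attackers' strengths.
  const compute = (id: string): number => {
    if (strength.has(id)) return strength.get(id)!;
    const ins = incoming(id);
    const signedSum = ins.reduce(
      (s, e) => s + (e.positive ? 1 : -1) * compute(e.from), 0);
    const value = ins.length === 0 ? 1 : (signedSum / ins.length + 1) / 2;
    strength.set(id, value);
    return value;
  };
  for (const e of edges) compute(e.to);
  return strength;
}
```

With memoization each node is computed once, so even large trees stay cheap, as suspected above.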

I don't know who can do this, but it'd be great if this can be done. If only I had the time to learn Typescript now :D

Anyway, I agree that this would be a fascinating experimental feature. Hope someone will step in and do it. Or maybe you can do it and sell this plugin, as a way of self-financing. I would be sure to purchase it.

First, let me ask you about the undercut you mentioned. As far as I can see, the undercut defeats a node, whereas it should defeat the relationship between two nodes (on the visual map), i.e. the undercut should attack the inference. Yes, it is marked with purple, but it still points to a node instead of the relationship arrow. That means that it defeats the node, which might be pointing to additional nodes through inferences that are valid. Do you have any plans of changing/amending this?

This is based on a misunderstanding of Argdown argument maps (which differ from other kinds of argument maps): in other argument maps, arguments simply link a set of premises together, and their outgoing arrows represent the inference to the conclusion. In Argdown, by contrast, inferences and conclusions are contained within (logically reconstructed) arguments, as each argument consists of a (possibly complex) premise-conclusion structure. An argument can contain several inferential steps and thus can actually represent a whole inference tree.

We do this for two reasons: first, as philosophers, we are used to reconstructing arguments as lists of statements representing inference trees, and we wanted to stay true to this tradition of argument reconstruction. Secondly, allowing arguments to contain complex inference trees lets us drastically reduce the visual complexity of argument maps. This makes it possible to visualize huge debates.

Undercuts actually attack the inferential steps within arguments:

(1) first premise
(2) second premise
----
    <_ undercut against this inference
(3) intermediary conclusion
(4) third premise
----
(5) main conclusion

A purple arrow in the argument map between two arguments means that the attacking argument attacks an inferential step inside the attacked argument's premise-conclusion structure (in contrast to a "normal" attack, which attacks a premise within the argument).

A green arrow between an argument and a statement actually does not represent the inference from that argument's premises to its conclusion. Instead it means that if the argument's conclusion is true, the supported statement has to be true as well (the conclusion logically-semantically entails the statement).

A green arrow from argument a to argument b means that if the conclusion of argument a is true, a premise of argument b has to be true as well. See here for more information on what the arrows mean in Argdown.

You cannot attack a green arrow, as it should only be used for entailment relations that are necessarily true based on the meaning of the statements used. Deductive or non-deductive inferences (which can be attacked with undercuts) are instead added in the logical reconstructions of arguments, not by "adding arrows".

This all makes more sense in "strict" mode, where each argument is logically reconstructed and all relations can be derived from logico-semantical relations between statements. If you do not reconstruct premise-conclusion structures, this might indeed seem confusing at first. But in that case you should think of the relations as first assumptions (sketches) of what logico-semantical relations would exist if one were to fully reconstruct all premise-conclusion structures.