Mixing Logic Programming and Neural Networks to Support Neurological Disorders Analysis
Francesco Calimeri, Francesco Cauteruccio, Luca Cinelli, Aldo Marzullo, Claudio Stamile and Giorgio Terracina.
Full results page for the paper submitted to Theory and Practice of Logic Programming (TPLP).
ASP encodings (normal):
Clique problem.
% input: facts of the form "node(X)" and "edge(X,Y,W)" as the input graph
% input: facts of the form "dc(X,Y,D)" as degradation coefficients for all edges
% input: support atom "zero(0)"
% guess the clique
{ clique(X) } :- node(X).
% take into account only active edges
activeEdge(X,Y) :- edge(X,Y,W), W>0.
:- clique(X), clique(Y), X < Y, not activeEdge(X,Y).
% maximize clique cardinality
:~ node(X), not clique(X). [1@1,X]
% edges to be modified in the new graph
e(X,Y,W) :- edge(X,Y,W), clique(X), clique(Y).
% define the new graph
edge1(X,Y,W) :- edge(X,Y,W), not e(X,Y,W).
edge1(X,Y,NW) :- e(X,Y,W), #max{K : e(X,Y,W), dc(X,Y,D), K=W-D ; 0 : zero(0)} = NW.
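For illustration, consider a small hypothetical instance (ours, not part of the benchmark data): a triangle with positive weights and uniform degradation coefficients.

```asp
% hypothetical instance (for illustration only): a triangle
node(1). node(2). node(3).
edge(1,2,5). edge(1,3,4). edge(2,3,6).
dc(1,2,2). dc(1,3,2). dc(2,3,2).
zero(0).
```

All edges are active, so the optimal answer set puts all three nodes in the clique; every intra-clique edge is degraded by its coefficient, e.g., edge1(1,2,3) replaces edge(1,2,5). The "; 0 : zero(0)" alternative in the #max aggregate ensures the new weight never drops below 0.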
Vertex Cover problem.
% guess the vertex cover
{ vertexcover(X) } :- node(X).
% take into account only active edges
activeEdge(X,Y) :- edge(X,Y,W), W>0.
% every active edge must have at least one endpoint in the cover
:- activeEdge(X,Y), not vertexcover(X), not vertexcover(Y), X < Y.
% minimize vertex cover cardinality
:~ vertexcover(X). [1@1,X]
% edges to be modified in the new graph
e(X,Y,W) :- edge(X,Y,W), vertexcover(X), vertexcover(Y).
% define the new graph
edge1(X,Y,W) :- edge(X,Y,W), not e(X,Y,W).
edge1(X,Y,NW) :- e(X,Y,W), #max{K : e(X,Y,W), dc(X,Y,D), K=W-D ; 0 : zero(0)} = NW.
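On a hypothetical triangle instance (ours, for illustration only), a minimum vertex cover consists of any two nodes, so the encoding has three optimal answer sets:

```asp
% hypothetical instance (for illustration only): a triangle
node(1). node(2). node(3).
edge(1,2,5). edge(1,3,4). edge(2,3,6).
dc(1,2,2). dc(1,3,2). dc(2,3,2).
zero(0).
```

In the answer set choosing the cover {1,2}, only edge(1,2,5) has both endpoints in the cover, so it becomes edge1(1,2,3); the other two edges keep their weights.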
\(k\)-hub problem.
% input: a fact of the form "khub(K)" fixing the number k of hubs
% take into account only active edges
activeEdge(X,Y) :- edge(X,Y,W), W>0.
% compute the degree of each node and the maximum degree
degree(N,X) :- #count{X,Y: activeEdge(X,Y)} = N, node(X).
maxDegree(L) :- #max{N: degree(N,_)} = L.
% iteratively select the k nodes with highest degree
topk(N,G,1) :- node(N), maxDegree(G), degree(N,G).
topk(N,G,P) :- topk(N1,G1,P-1), degree(N,G), G < G1, not inBetween(N1,G1,N,G), P < K, khub(K).
% some node has degree strictly between G and G1
inBetween(N1,G1,N,G) :- degree(N1,G1), degree(N,G), degree(N2,G2), N1 != N, N2 != N, G1 > G2, G2 > G.
% edges to be modified in the new graph
e(X,Y,W) :- edge(X,Y,W), topk(X,_,_).
e(X,Y,W) :- edge(X,Y,W), topk(Y,_,_).
% define the new graph
edge1(X,Y,W) :- edge(X,Y,W), not e(X,Y,W).
edge1(X,Y,NW) :- e(X,Y,W), #max{K : e(X,Y,W), dc(X,Y,D), K=W-D ; 0 : zero(0)} = NW.
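A hypothetical instance (ours, for illustration only) showing the extra input fact khub(K) this encoding expects in addition to the graph:

```asp
% hypothetical instance (for illustration only): select k = 2 hubs
khub(2).
node(1). node(2). node(3). node(4).
edge(1,2,5). edge(1,3,5). edge(2,4,5).
dc(1,2,1). dc(1,3,1). dc(2,4,1).
zero(0).
```

The topk predicate collects the k nodes of highest degree; every edge incident to a selected hub is then degraded by its coefficient in the new graph.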
Independent Set problem.
% guess the independent set
{ independentset(X) } :- node(X).
% take into account only active edges
activeEdge(X,Y) :- edge(X,Y,W), W>0.
% no two nodes in the set may be joined by an active edge
:- activeEdge(X,Y), independentset(X), independentset(Y), X < Y.
% maximize independent set cardinality
:~ node(X), not independentset(X). [1@1,X]
% edges to be modified in the new graph
e(X,Y,W) :- edge(X,Y,W), independentset(X), not independentset(Y).
% define the new graph
edge1(X,Y,W) :- edge(X,Y,W), not e(X,Y,W).
edge1(X,Y,NW) :- e(X,Y,W), #max{K : e(X,Y,W), dc(X,Y,D), K=W-D ; 0 : zero(0)} = NW.
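On a hypothetical triangle instance (ours, for illustration only), every pair of nodes is joined by an active edge, so a maximum independent set is a single node:

```asp
% hypothetical instance (for illustration only): a triangle
node(1). node(2). node(3).
edge(1,2,5). edge(1,3,4). edge(2,3,6).
dc(1,2,2). dc(1,3,2). dc(2,3,2).
zero(0).
```

In the answer set selecting {1}, the edges leaving the set, edge(1,2,5) and edge(1,3,4), are degraded to edge1(1,2,3) and edge1(1,3,2), while edge(2,3,6) is unchanged.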
Max Degree node problem.
% take into account only active edges
activeEdge(X,Y) :- edge(X,Y,W), W>0.
% compute the degree of each node and the maximum degree
degree(N,X) :- #count{X,Y: activeEdge(X,Y)} = N, node(X).
maxDegree(L) :- #max{N: degree(N,_)} = L.
nodeMaxDegree(X) :- degree(N,X), maxDegree(N).
% edges to be modified in the new graph
e(X,Y,W) :- edge(X,Y,W), nodeMaxDegree(X).
% define the new graph
edge1(X,Y,W) :- edge(X,Y,W), not e(X,Y,W).
edge1(X,Y,NW) :- e(X,Y,W), #max{K : e(X,Y,W), dc(X,Y,D), K=W-D ; 0 : zero(0)} = NW.
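A hypothetical star-shaped instance (ours, for illustration only) makes the behavior easy to see:

```asp
% hypothetical instance (for illustration only): a star centered in node 1
node(1). node(2). node(3). node(4).
edge(1,2,5). edge(1,3,5). edge(1,4,5).
dc(1,2,2). dc(1,3,2). dc(1,4,2).
zero(0).
```

Node 1 is the unique node of maximum degree, so all three edges incident to it are degraded to weight 3 in the new graph; the encoding is deterministic and yields a single answer set.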
ASP encodings (maximizing the use of important/not important edges):
Clique problem.
% input: facts of the form "node(X)" and "edge(X,Y,W)" as the input graph
% input: facts of the form "imp(X,Y,P)" denoting the importance of edge(X,Y,W)
% input: a fact of the form "th(T)" used to discriminate between important and not important edges
% guess the clique
{ clique(X) } :- node(X).
% take into account only active edges
% with P <= T, only not important edges can join the clique;
% replace P <= T with P >= T to use only important edges
activeEdge(X,Y) :- edge(X,Y,W), W>0, imp(X,Y,P), th(T), P <= T.
:- clique(X), clique(Y), X < Y, not activeEdge(X,Y).
% maximize clique cardinality
:~ node(X), not clique(X). [1@1,X]
% edges to be modified in the new graph
e(X,Y,W) :- edge(X,Y,W), clique(X), clique(Y).
% define the new graph
edge1(X,Y,W) :- edge(X,Y,W), not e(X,Y,W).
edge1(X,Y,0) :- e(X,Y,W).
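A hypothetical instance (ours, for illustration only), assuming the threshold fact is written th(50) in lowercase, since ASP predicate names cannot start with an uppercase letter:

```asp
% hypothetical instance (for illustration only)
node(1). node(2). node(3).
edge(1,2,5). edge(1,3,4). edge(2,3,6).
imp(1,2,20). imp(1,3,30). imp(2,3,80).
th(50).
```

With P <= T, only edges (1,2) and (1,3) are active, so nodes 2 and 3 cannot both be in the clique; an optimal clique has two nodes, e.g. {1,2}, and its internal edge is zeroed out: edge1(1,2,0).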
ASP encodings (decrease of a graph metric maximizing important/not important edges):
Decrease of a graph metric.
% input: facts of the form "node(X)" and "edge(X,Y,W)" as the input graph
% compute the starting value of the metric and the corresponding
% threshold (90% of the starting value, i.e., a 10% decrease)
target(X) :- &computeMetric[activeEdge](Y), X = Y*90/100.
% take into account only active edges
activeEdge(X,Y) :- edge(X,Y,W), W>0.
% guess the new graph
0 < { in(X,Y): activeEdge(X,Y) }.
% check that the goal is reached
:- &computeMetric[in](X), target(K), X > K.
:~ activeEdge(X,Y), not in(X,Y). [1@1,X,Y]
% define the new graph
edge1(X,Y,W) :- in(X,Y), edge(X,Y,W).
edge1(X,Y,0) :- edge(X,Y,W), not in(X,Y).
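Here &computeMetric appears to be an external atom, i.e., it is evaluated by procedural code attached to the solver rather than by the logic program itself, so the encoding cannot run on a plain ASP system. A minimal hypothetical instance (ours, for illustration only) is just a weighted graph:

```asp
% hypothetical instance (for illustration only)
node(1). node(2). node(3).
edge(1,2,5). edge(1,3,4). edge(2,3,6).
```

The choice rule guesses a non-empty subset of the active edges, the strong constraint rejects subsets whose metric value still exceeds the target, and the weak constraint prefers dropping as few edges as possible; dropped edges get weight 0 in the new graph.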
Figures:
Structural properties, damage \(p = 50\%\).


Average probability values for each iteration \(i = 0..4\) for the logic program Clique. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program Vertex Cover. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program \(k\)-hub. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program Independent Set. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program Max Degree node. For each stage, framework results (left) and random test (right) are reported.
Structural properties, damage \(p = 25\%\).


Average probability values for each iteration \(i = 0..4\) for the logic program Clique. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program Vertex Cover. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program \(k\)-hub. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program Independent Set. For each stage, framework results (left) and random test (right) are reported.

Average probability values for each iteration \(i = 0..4\) for the logic program Max Degree node. For each stage, framework results (left) and random test (right) are reported.
Graph metrics, decrease of \(10\%\) at each iteration.


Results obtained for a decrease of 10% (at each iteration of the framework) of the graph metric Density. Results for all stages (left) and detail for CIS transitions (right).
Results obtained for a decrease of 10% (at each iteration of the framework) of the graph metric Assortativity.
Using ANN insights.

Variation of classification results removing less important edges.
Analyzing the structural property Clique altering not important edges.
Analyzing the graph metric Density altering important edges.