%% Cell type:markdown id: tags:
# <center>Using Graph Theory in Neural Network Optimization</center>
<center>by Ethan Tu</center>
%% Cell type:markdown id: tags:
The purpose of a graph is to map out the relationships between a group of objects. A graph is constructed by connecting nodes together with lines. The nodes, commonly called vertices, represent individual objects or states, while the lines, commonly called edges, represent the relationship between any two objects. A graph can be as simple as a single node with no edges. For example, if we were to map out my friendships with other people as a graph, we would have one node (representing me) and no other edges or nodes (representing the fact that I have no friends). As graphs become more complicated, with multiple edges per node, weighted edges, and directed or undirected edges, new algorithms for searching, traversing, adding nodes, merging branches, and so on need to be developed. In short, any problem that can be represented as a graph can be solved using ideas from graph theory.
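To make this concrete, here is a minimal sketch (purely illustrative, not part of my perfusion models) of a graph stored as an adjacency list, including the one-node friendship graph described above:

```python
# Minimal sketch: a graph as an adjacency list (illustrative only).
# Each node maps to the set of nodes it shares an edge with.
friend_graph = {
    "me": set(),          # a single node with no edges
}

social_graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
}

def degree(graph, node):
    """Number of edges incident to a node."""
    return len(graph[node])

print(degree(friend_graph, "me"))     # 0
print(degree(social_graph, "alice"))  # 2
```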
For my project, I want to describe the perfusion of a drug into blood and the surrounding tissue as it travels down a vessel. There are various models used to describe this perfusion, which I have mentioned in previous reports. While such models are not normally thought of as graphs, they are. Compartmental modeling is, in essence, drawing a weighted, directed graph where the compartments are the nodes and the substance exchanges between compartments are the edges. The edges are weighted because the relationship between two compartments is normally dictated by a rate constant K. However, in compartmental modeling, we always want to minimize complexity by reducing the number of compartments to as few as possible. Adding another compartment may be more biologically representative, but if similar accuracy can be achieved with a less complex model, the general wisdom is to use the less complex model. For example, if we were to create a compartmental model for our perfusion kinetics problem, we might use anywhere between 1 and 6 compartments. The least complex is a single compartment representing the blood, where we would only care about the rate of efflux of the drug out of it. At the most complex, we could treat the blood, epithelial layer, tunica intima, tunica media, tunica externa, and interstitial fluid each as its own compartment, with multiple ODEs representing the efflux and influx of substance between each layer. Yes, the 6-compartment model better mimics the biological anatomy of our blood vessels, but unless it were vastly more accurate than lower complexity models, it would never be used. Research in this area has generally settled on 2- or 3-compartment models, with variations in the ODEs. However, since the graph is so small (3 nodes), most researchers don't need to apply graph theory algorithms to solve such models. Instead, graph theory shows up in the optimization step, when fitting the models to data.
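As an illustration of this idea, the sketch below writes a 2-compartment model (blood and interstitial fluid) as a weighted, directed graph and integrates the resulting mass-balance ODEs. The rate constants and initial condition are placeholder values I made up for the example, not fitted parameters from any of my models:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a 2-compartment model (blood <-> interstitial fluid) written as a
# weighted, directed graph. Edge weights are hypothetical rate constants (1/min);
# the numbers are placeholders, not fitted values.
compartments = ["blood", "isf"]
edges = {
    ("blood", "isf"): 0.30,   # k: blood -> interstitial fluid
    ("isf", "blood"): 0.10,   # k: interstitial fluid -> blood
}

def dcdt(t, c):
    """Mass balance: each directed edge moves substance from source to target."""
    dc = np.zeros(len(compartments))
    for (src, dst), k in edges.items():
        i, j = compartments.index(src), compartments.index(dst)
        flux = k * c[i]
        dc[i] -= flux
        dc[j] += flux
    return dc

# All of the drug starts in the blood compartment; integrate for 60 minutes.
sol = solve_ivp(dcdt, (0, 60), [1.0, 0.0], t_eval=np.linspace(0, 60, 121))
print(sol.y[:, -1])   # amounts in (blood, isf) at t = 60
```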
I have mentioned before, in the optimization report, that perfusion kinetics models aim to minimize the least squares sum between the model's curve and experimental data. The curve is dictated by unknown parameters such as the rate constants K, the permeability-surface area constant PS, the volume of blood Vb, the volume of interstitial fluid Visf, or the flow F.[1] The optimal values of these unknown variables, the ones that give the smallest least squares sum, can be found in a number of different ways, including with artificial neural networks. A neural network is, in essence, a multipartite graph.[2] Basically, there are "layers" of nodes that have directed edges linking them to the next layer but no edges to any other node within the same layer.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Colored_neural_network.svg/800px-Colored_neural_network.svg.png" width="50%">
<p style="text-align: right;">Image from: https://en.wikipedia.org/wiki/Artificial_neural_network</p>
Here we see an input layer feeding into a hidden layer, which feeds into an output layer. This is the most basic form of an artificial neural network. We can train this network to output the set of parameters that best fits a data set by adjusting the weights of the edges between each layer. The adjustment to the weights is calculated using a loss function, which essentially quantifies the error. In this scenario, our loss function is our least squares equation. To change the weights of each layer, we go through a process called backpropagation. Backpropagation starts from the output layer, works backwards toward the input layer, and distributes the adjustments accordingly. Let’s go back to our perfusion kinetics example. We can take each unknown parameter and treat it as a node in our input layer. The network feeds this input into the hidden layer, which takes in the weighted inputs and produces an output (our curve). A loss function (the least squares sum) is calculated, and depending on its result, we adjust the weights (values) of each unknown variable accordingly. Plainly speaking, the neural network will hopefully output Vb, F, Visf, and PS values close to the real biological values of our subject.
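The sketch below is my own generic illustration of this training loop, using made-up layer sizes and a synthetic curve rather than real perfusion data or my actual models: one weight matrix sits between each pair of adjacent layers (edges never stay within a layer, which is what makes the network multipartite), the loss is a least squares sum, and backpropagation works from the output layer back toward the input layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experimental" data standing in for a measured curve.
t = np.linspace(0, 1, 50).reshape(-1, 1)
y_true = np.sin(2 * np.pi * t)

# One weight matrix per pair of adjacent layers (input -> hidden -> output);
# edges only cross between consecutive layers, never within a layer.
W1 = rng.normal(size=(1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = np.tanh(t @ W1 + b1)
    y_hat = h @ W2 + b2

    # Least squares loss between the model curve and the data.
    err = y_hat - y_true
    loss = np.mean(err ** 2)

    # Backpropagation: start at the output layer, push gradients backwards.
    grad_y = 2 * err / len(t)
    grad_W2 = h.T @ grad_y
    grad_b2 = grad_y.sum(axis=0)
    grad_h = grad_y @ W2.T
    grad_z = grad_h * (1 - h ** 2)   # derivative of tanh
    grad_W1 = t.T @ grad_z
    grad_b1 = grad_z.sum(axis=0)

    # Adjust the edge weights between each pair of layers.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(f"final least squares loss: {loss:.4f}")
```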
Graph theory is a powerful set of tools, algorithms, and problems that can be applied to many different fields. In this project, I have started optimizing my unknown variables using the simplex algorithm. However, if time allows, I might also try optimizing my models using neural networks.
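For reference, here is a hedged sketch of that simplex step, assuming the Nelder-Mead (downhill simplex) variant available in SciPy. The toy model function, the "true" rate constants, and the noise level are all invented for illustration and are not my actual perfusion model or data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def model_curve(t, k1, k2):
    """Toy stand-in for a perfusion model: a bi-exponential washout curve."""
    return np.exp(-k1 * t) + 0.5 * np.exp(-k2 * t)

# Synthetic "experimental" data generated from made-up rate constants.
t = np.linspace(0, 10, 100)
data = model_curve(t, 0.8, 0.2) + rng.normal(scale=0.02, size=t.size)

def least_squares_sum(params):
    k1, k2 = params
    return np.sum((model_curve(t, k1, k2) - data) ** 2)

# Nelder-Mead is a derivative-free simplex method for this kind of fit.
result = minimize(least_squares_sum, x0=[0.5, 0.5], method="Nelder-Mead")
print(result.x)   # estimated rate constants, ideally close to (0.8, 0.2)
```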
%% Cell type:markdown id: tags:
---
# References
[1] A. M. Alessio, M. Bindschadler, J. M. Busey, W. P. Shuman, J. H. Caldwell, and K. R. Branch, “Accuracy of Myocardial Blood Flow Estimation From Dynamic Contrast-Enhanced Cardiac CT Compared With PET,” Circulation: Cardiovascular Imaging, vol. 12, no. 6, p. e008323, Jun. 2019, doi: 10.1161/CIRCIMAGING.118.008323.
[2] F. Tong, “Graph Theory and Deep Learning know-hows,” Medium, 20-May-2019. [Online]. Available: https://towardsdatascience.com/graph-theory-and-deep-learning-know-hows-6556b0e9891b. [Accessed: 07-Feb-2020].