Model

Computation Model

In this part, we present our computation model based on chemical reactions and prove its validity via kinetic analysis.

In our system, numerical values are represented by the concentrations of certain species. Chemical reactions are simplified as formal reactions such as \(A+B \xrightarrow{k} C\). A formal reaction consists of reactants (\(A\), \(B\)), products (\(C\)) and a rate constant (\(k\)). For each computation operation, the initial concentrations of certain species serve as the inputs, and the output is represented by the concentration of another species in the system at the end of the reaction.

According to [1], formal reactions can be mapped to DNA strand displacement (DSD) reactions [2] without losing their kinetic features. We adopt this model in our project to implement the calculation operations.

As only addition, subtraction and multiplication are required in our project, we provide implementations of these three operations only. For each operation, we first give the formal reactions, then provide a kinetic analysis, and finally propose a DSD reaction implementation. The analysis in our model is based entirely on classical mass-action kinetics.
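
Alongside the analytic proofs below, the limiting concentrations can also be checked numerically by integrating the mass-action rate equations. The following sketch is purely illustrative (Python with NumPy/SciPy; the simulate helper and its naming are ours and are not part of our software tool): it assembles the rate equation of every species from a list of formal reactions and integrates the system to a large end time.

import numpy as np
from scipy.integrate import solve_ivp

def simulate(reactions, c0, t_end=200.0):
    # reactions: list of (reactants, products, k); species are named by strings.
    # c0: dict mapping every species to its initial concentration.
    # Returns the concentrations at t = t_end as a dict.
    species = sorted(c0)
    idx = {s: i for i, s in enumerate(species)}

    def rhs(t, c):
        dc = np.zeros_like(c)
        for reactants, products, k in reactions:
            # mass-action rate: k times the product of reactant concentrations
            rate = k * np.prod([c[idx[s]] for s in reactants])
            for s in reactants:
                dc[idx[s]] -= rate
            for s in products:
                dc[idx[s]] += rate
        return dc

    sol = solve_ivp(rhs, (0.0, t_end), [c0[s] for s in species],
                    rtol=1e-8, atol=1e-10)
    return {s: sol.y[idx[s], -1] for s in species}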

Addition:

To implement addition, we utilize:
\(A_1 \xrightarrow{k_1} O,\quad A_2 \xrightarrow{k_2} O,\quad \dots,\quad A_n \xrightarrow{k_n} O\). The initial concentrations of the reactants \(A_i\) are considered the inputs, and the final concentration of the product \(O\) represents the result. These reactions calculate \([O](\infty)=\sum_{i=1}^n[A_i](0)\).

Proof:

Assuming \([O](0)=0\): \(\dfrac{d [A_i](t)}{d t}=-k_i[A_i](t)\) \((i=1,2,\dots,n)\) \(\Rightarrow [A_i](t)=[A_i](0)e^{-k_it}\), and \(\dfrac{d [O](t)}{d t}=\sum_{i=1}^n k_i[A_i](t)\) \(\Rightarrow [O](t)=\sum_{i=1}^n[A_i](0)\left(1-e^{-k_it}\right)\) \(\Rightarrow [O](\infty)=\sum_{i=1}^n[A_i](0)\). Thus addition is successfully implemented.
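
As an illustrative check (using the simulate sketch above with arbitrarily chosen inputs and rate constants), the final concentration of \(O\) matches the sum of the initial concentrations regardless of the individual \(k_i\):

# Addition: A1 -> O, A2 -> O, A3 -> O with different rate constants.
reactions = [(["A1"], ["O"], 1.0),
             (["A2"], ["O"], 0.5),
             (["A3"], ["O"], 2.0)]
c0 = {"A1": 1.0, "A2": 2.0, "A3": 3.0, "O": 0.0}
print(simulate(reactions, c0)["O"])   # ~6.0 = 1.0 + 2.0 + 3.0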

The DSD implementation:

Subtraction:

To build a subtractor, we use \(A+B \xrightarrow{k_1} \phi\). The result, \(|[A](0)-[B](0)|\), is the final concentration of \(A\) or \(B\), whichever species is initially in excess.

Proof:

Let \(\Delta=[A](0)-[B](0)\) and, without loss of generality, take \(k_1=1\) (the limiting concentrations do not depend on the rate constant). Since \(A\) and \(B\) are consumed in equal amounts, \([A](t)=[B](t)+\Delta\) for all \(t\).
If \(\Delta \neq 0\), \(\dfrac{d [A](t)}{d t}=-[A](t)([A](t)-\Delta)\) \(\Rightarrow [A](t)=\dfrac{[A](0)\Delta}{[A](0)-([A](0)-\Delta)e^{-\Delta t}}.\) If \(\Delta > 0\), \([A](\infty)=\Delta\) and \([B](\infty)=0\); otherwise \([A](\infty)=0\) and \([B](\infty)=-\Delta\).
If \(\Delta =0\), \(\dfrac{d [A](t)}{d t}=-[A](t)^2\) \(\Rightarrow [A](t)=\dfrac{[A](0)}{1+[A](0)t}\) \(\Rightarrow [A](\infty)=0\). Hence subtraction is implemented.
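
A numerical check of the subtraction reaction (again a sketch on top of the simulate helper above, with arbitrary inputs):

# Subtraction: A + B -> (nothing); the species in excess survives
# with final concentration |[A](0) - [B](0)|.
reactions = [(["A", "B"], [], 1.0)]
final = simulate(reactions, {"A": 5.0, "B": 2.0})
print(final["A"], final["B"])   # ~3.0, ~0.0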

The DSD implementation:

Multiplication:

To calculate concentration multiplication, we utilize \(\alpha \xrightarrow{k_1} \phi,\quad A+B+\alpha \xrightarrow{k_2} A+B+\alpha+C\). These reactions calculate \([C](\infty)=\dfrac{k_2}{k_1}[\alpha](0)[A](0)[B](0)\), which equals \([A](0)\times[B](0)\) when \(k_2[\alpha](0)=k_1\).

Proof:

Assuming \([C](0)=0\): \(\dfrac{d [\alpha](t)}{d t}=-k_1[\alpha](t)\) \(\Rightarrow [\alpha](t)=[\alpha](0)e^{-k_1t}\). Since \(\dfrac{d [A](t)}{d t}=\dfrac{d [B](t)}{d t}=0\) and \(\dfrac{d [C](t)}{d t}=k_2[A](t)[B](t)[\alpha](t)\), we obtain \([C](\infty)=\int_0^\infty k_2[A](0)[B](0)[\alpha](0)e^{-k_1t}\,dt=\dfrac{k_2}{k_1}[\alpha](0)[A](0)[B](0)\). Hence multiplication is implemented.
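
An illustrative check of the multiplication reactions (a sketch based on the simulate helper above; we choose \(k_1=k_2\) and \([\alpha](0)=1\) so the result reduces to \([A](0)\times[B](0)\)):

# Multiplication: alpha -> (nothing) at k1; A + B + alpha -> A + B + alpha + C at k2.
reactions = [(["alpha"], [], 1.0),
             (["A", "B", "alpha"], ["A", "B", "alpha", "C"], 1.0)]
c0 = {"A": 2.0, "B": 3.0, "alpha": 1.0, "C": 0.0}
print(simulate(reactions, c0)["C"])   # ~6.0 = 2.0 * 3.0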

The DSD implementation:

Compared to the formal reactions, the DSD implementation contains two minor changes:
1. \(\alpha\) is canceled to reduce the number of reactants.
2. One of the reactants is consumed.

Neuron Implementation

The model shown here is used to construct our Software Tool, which is capable of automatically generating a neuron according to user specifications.

1. Input layer

The input layer calculates a weighted sum of all input data, i.e. it calculates \(\sum_{i=1}^n w_ix_i\), where \(n\) is the number of inputs and \(w_i, x_i\) (\(i=1,2,\dots,n\)) are the weights and inputs, respectively. In our implementation, the multiplication method proposed above is employed, and the positive and negative parts of the result are calculated separately using multiplication.
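
Since concentrations cannot be negative, the weighted sum is tracked as a positive pool and a negative pool. The following minimal sketch (plain Python; the function name and example values are ours, for illustration of the arithmetic only, not of the reactions themselves) shows this dual-rail bookkeeping for nonnegative inputs \(x_i\):

# Dual-rail weighted sum: each |w_i| * x_i (obtained via the multiplication
# reactions) feeds the positive pool if w_i > 0 and the negative pool if w_i < 0.
def weighted_sum_dual_rail(weights, inputs):
    pos = sum(w * x for w, x in zip(weights, inputs) if w > 0)
    neg = sum(-w * x for w, x in zip(weights, inputs) if w < 0)
    return pos, neg   # the actual weighted sum is pos - neg

pos, neg = weighted_sum_dual_rail([0.5, -1.0, 2.0], [4.0, 3.0, 1.0])
print(pos, neg)   # 4.0 3.0  ->  weighted sum = 1.0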

2. Activation

Activation functions are used in neural networks to provide non-linearity: after the weighted sum passes through an activation layer, the result is no longer a linear combination of the inputs. Widely used activation functions include the sigmoid function \(f(x)=\dfrac{1}{1+e^{-x}}\) and the rectified linear unit (ReLU) \(f(x)=\max(0,x)\), i.e. \(f(x)=0\) if \(x\le 0\) and \(f(x)=x\) if \(x>0\).

From the expression of ReLU, we can easily deduce that only subtraction is needed in this step to generate \(0\) when the weighted sum is negative.
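
Concretely, feeding the positive pool and the negative pool of the weighted sum into the subtraction reaction \(A+B \xrightarrow{k_1} \phi\) leaves \(\max(\mathrm{pos}-\mathrm{neg},\,0)\), which is ReLU of the weighted sum. A short numerical sketch (our own illustration, continuing the dual-rail example above):

# ReLU via subtraction: the surplus of the positive pool over the negative pool
# is what survives the reaction A + B -> (nothing).
def relu_by_subtraction(pos, neg):
    return max(pos - neg, 0.0)

print(relu_by_subtraction(4.0, 3.0))   # 1.0  (weighted sum is positive)
print(relu_by_subtraction(3.0, 4.0))   # 0.0  (weighted sum is negative)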

3. Backpropagation

References

[1] D. Soloveichik, G. Seelig, E. Winfree, "DNA as a universal substrate for chemical kinetics," Proceedings of the National Academy of Sciences, vol. 107, no. 12, pp. 5393–5398, 2010.

[2] DNA Strand Displacement.