Improve
This is our first time participating in iGEM. According to our survey, there are few similar projects in previous iGEM competitions, so this page focuses on improving the state-of-the-art molecular computing neural network reported in the literature [1].
In the previous paper, inputs to the neural network are binary, i.e. the values are either 0 or 1, which limits its applications. In computer science applications, data are usually real numbers; if we can support real-valued computation, CPU/GPU-based software applications could potentially be mapped onto molecular circuits. More importantly, training is not integrated into the system, even though it is a critical part of neural networks, and relying entirely on in silico training cannot fully exploit the potential of molecular computing. Hence we make the following improvements:
1. Employ continuous values during computation.
2. Integrate training into our neurons (a brief sketch of both ideas follows this list).
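To make these two improvements concrete, here is a minimal in silico sketch in plain Python of a two-neuron winner-take-all classifier that accepts continuous-valued inputs and adjusts its weights with a simple competitive, perceptron-style update. The sketch is only illustrative: the function names, learning rate, and update rule are assumptions made for this example and do not describe the DNA strand-displacement implementation itself.

```python
# Minimal illustrative sketch (plain Python, not the DNA implementation):
# a two-neuron winner-take-all classifier with continuous-valued inputs
# and a simple competitive, perceptron-style training step.

def winner_take_all(x, weights):
    """Return the index of the neuron with the largest weighted sum."""
    sums = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    return max(range(len(sums)), key=lambda i: sums[i])

def train_step(x, label, weights, lr=0.5):
    """If the prediction is wrong, strengthen the correct neuron's weights
    and weaken the current winner's weights."""
    winner = winner_take_all(x, weights)
    if winner != label:
        weights[label] = [w + lr * xi for w, xi in zip(weights[label], x)]
        weights[winner] = [w - lr * xi for w, xi in zip(weights[winner], x)]
    return weights

# Inputs are continuous (e.g. normalized concentrations), not just 0 or 1.
weights = [[0.1, 0.9], [0.9, 0.1]]                 # deliberately bad start
samples = [([0.9, 0.1], 0), ([0.1, 0.95], 1)]      # (input, class label)

for _ in range(5):                                 # a few training epochs
    for x, label in samples:
        weights = train_step(x, label, weights)

print(winner_take_all([0.85, 0.2], weights))       # expected output: 0
```

In the molecular setting, the weighted sums would correspond to concentrations of summation species rather than floating-point numbers; the sketch only shows the kind of arithmetic that continuous inputs and integrated training require.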
We also develop a Software Tool to help other researchers design their own systems.
Reference
[1] K. Cherry and L. Qian, "Scaling up molecular pattern recognition with DNA-based winner-take-all neural networks," Nature, vol. 559, no. 7714, pp. 370-376, 2018.