MODEL
PATHWAYSEARCH
Overview
Building on the algorithm of last year's Tongji_Software team, we optimized its implementation and added a new search algorithm. We then built a model that combines the two algorithms, which greatly improves the speed of pathway search.
Previous work
Last year's team abstracted the reaction pathways into a directed graph, searched it with the depth-first search (DFS) algorithm (Gif 1), and scored each pathway according to thermodynamic feasibility, precursor competition, toxicity of metabolites, and atom conservation. Details can be found in last year's model.
Gif1. Search process of DFS
Optimization of DFS
We optimized the internal implementation logic and other details of the DFS algorithm to reduce its running time, and compiled the Python script into a dynamic link library with the Cython module, which makes it run much faster when invoked. As the figure below shows (Fig. 1), the optimized DFS is on average nearly ten times faster.
Fig1. Running time of DFS & optimized DFS at different search depths
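As a minimal sketch of the Cython build step (the module name dfs_search.pyx is a placeholder, not our actual file name), the search script can be compiled into an extension module like this:

```python
# setup.py -- compile the DFS search module with Cython
# Build with: python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(
    # "dfs_search.pyx" is a placeholder name for the search script
    ext_modules=cythonize("dfs_search.pyx", language_level="3"),
)
```

The resulting shared library can then be imported from Python like a normal module.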
Another choice: BFS
By replotting Fig. 1 on a logarithmic axis (Fig. 2), we can see that the running time of DFS grows exponentially with the search depth. We therefore need another algorithm for searches at large depths. We implemented a heuristic algorithm: greedy best-first search (BFS). BFS scores paths while it searches, and at each step it expands the highest-scoring path found so far, so it quickly reaches a local optimum (Gif 2); a minimal sketch of this strategy is shown below.
Fig2. Running time of DFS increases exponentially with search depth
Gif2. Search process of BFS
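The following sketch illustrates the greedy best-first strategy on a generic directed graph, assuming a `graph` adjacency dict and a `score` function for partial paths (both hypothetical stand-ins for our actual data structures and scoring model):

```python
import heapq
from itertools import count

def greedy_best_first(graph, score, start, target, max_depth):
    """Greedy best-first search: always expand the highest-scoring partial path.

    graph: dict mapping a compound to the compounds reachable in one reaction step.
    score: function mapping a partial path (list of compounds) to a number,
           higher is better; heapq is a min-heap, so scores are negated.
    """
    tie = count()                      # tie-breaker so paths are never compared
    queue = [(-score([start]), next(tie), [start])]
    while queue:
        _, _, path = heapq.heappop(queue)
        node = path[-1]
        if node == target:
            return path                # first completed path = local optimum
        if len(path) > max_depth:
            continue                   # depth limit reached, drop this branch
        for nxt in graph.get(node, []):
            if nxt not in path:        # avoid cycles
                new_path = path + [nxt]
                heapq.heappush(queue, (-score(new_path), next(tie), new_path))
    return None                        # no pathway found between the two compounds
```

Because every expansion has to maintain the priority queue, a single BFS step costs more than a single DFS step, which is why BFS only pays off when a good path is found quickly.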
Validation of BFS accuracy
As a heuristic algorithm, BFS converges to a local optimum rather than the global optimum, so we needed to verify the accuracy of its search results.
In 41,000 random test samples, 40,316 results were correct, i.e. an accuracy of 98.33%. We also calculated the search accuracy of BFS under different conditions and plotted it as a heat map. The results show that BFS performs very well when the user requests only a few paths; when only one path is requested, the error rate is at most around 1%. Overall, the BFS error rate is very low and does not significantly affect normal use.
In terms of speed, 20 random tests (Fig. 3) showed that BFS can be much faster than DFS. However, larger random tests showed that the running time of BFS is very unstable: when no pathway exists between the two compounds, BFS has to search the same amount of data as DFS, and because BFS must maintain a priority queue, each of its search steps takes much longer than a DFS step, so its running time becomes unacceptably long. We therefore cannot simply replace DFS with BFS.
Fig3. Running time comparison between DFS & BFS
Build a model
We trained a machine learning model, a random forest, to predict before each search which algorithm will finish faster, and then perform the search with the predicted algorithm.
We obtained 41,000 random test samples and extracted 7 features from the input data: the starting compound number, the target compound number, the search depth, the number of required paths, the similarity between the starting and target compounds, the number of compounds reachable within 2 steps from the starting compound, and the number of compounds reachable within 2 steps from the target compound. We used a sigmoid function to map the difference between the running times of the two algorithms into the range 0 to 1 as the label: a value greater than 0.5 indicates that BFS takes longer than DFS, i.e. DFS should be used, and vice versa. The advantage of the sigmoid function is that when the difference is large, the label value tends to 0 or 1, which clearly tells the model which class the sample belongs to.
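A minimal sketch of this label construction (the function name and the scale factor k are assumptions, not values taken from our code):

```python
import math

def make_label(t_bfs, t_dfs, k=1.0):
    """Map the running-time difference onto (0, 1) with a sigmoid.

    t_bfs, t_dfs: measured running times of the two algorithms (seconds).
    k: hypothetical scale factor controlling how quickly the label saturates.
    Label > 0.5 means BFS was slower, so DFS should be chosen.
    """
    diff = t_bfs - t_dfs
    return 1.0 / (1.0 + math.exp(-k * diff))
```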
The 41,000 processed samples were used as training data for a random forest regression model, and the model was scored with the following formula: the ratio of the total running time of the algorithms selected by the model to the total running time when the optimal algorithm is chosen every time. The smaller the ratio, the closer the model's choices are to the optimum.
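Written out, the scoring formula described above is (the symbols are ours, not our team's original notation):

\[
\text{score} \;=\; \frac{\sum_{i} t_{i}^{\text{model}}}{\sum_{i} \min\!\left(t_{i}^{\text{DFS}},\, t_{i}^{\text{BFS}}\right)}
\]

where \(t_{i}^{\text{model}}\) is the running time of the algorithm the model selects for sample \(i\).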
Considering the higher stability and accuracy of DFS, the decision threshold between the two algorithms was then adjusted to the value that minimizes this scoring function, which is around 0.3.
After the above steps, we trained the random forest regression model and obtained 3,000 additional random test samples to validate it. On these test samples, using DFS alone took about 10,000 seconds; using BFS alone took about 260,000 seconds; always using the better algorithm would take about 7,000 seconds; and using the model's predictions took about 9,100 seconds. The model therefore brings the total running time to within about 30% of the theoretical optimum.
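A minimal sketch of the training and selection step with scikit-learn (the data arrays and hyperparameters below are placeholders for illustration; only the ~0.3 threshold comes from the text above):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: (n_samples, 7) feature matrix built from the 7 features described above.
# y: sigmoid-mapped running-time differences in (0, 1).
X = np.random.rand(41000, 7)   # placeholder data for illustration only
y = np.random.rand(41000)      # placeholder labels for illustration only

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

def choose_algorithm(features, threshold=0.3):
    """Pick the search algorithm for one query.

    The threshold is shifted from 0.5 to ~0.3 in favour of DFS,
    because DFS is more stable and its results are always exact.
    """
    pred = model.predict(np.asarray(features).reshape(1, -1))[0]
    return "DFS" if pred > threshold else "BFS"
```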
ENZYME SELECTION
Overview
For each enzyme in a searched pathway, or for a single enzyme chosen directly, we can retrieve its physical and chemical properties in every organism in which the enzyme exists. We first assume that the user's engineering chassis is E. coli or yeast, and we therefore compare all source organisms with E. coli or yeast in terms of these physical and chemical properties.
After reviewing the literature, we found no standard method or formula for this comparison, and that the intracellular environment of each organism is stable [1]. We therefore built our enzyme scoring model in discussion with our instructors.
KKM and KM comparison score
After collecting the enzyme records, and considering the differences in order of magnitude between them, we use min-max normalization (dispersion standardization) to calculate the comparison score.
For this statistic, if a KM record exists in the database, it is added to the KM list; if not, we collect the KM values of other enzymes that catalyze the same substrate and use their median as a substitute.
First, we gather all KM and KKM values of the selected enzymes into two lists. For each enzyme, we compute the relative position of its KM value within the range of the KM list, and this relative position is the KM comparison score.
The KKM score is calculated in the same way from the KKM list.
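A minimal sketch of this min-max (relative position) scoring, assuming plain Python lists of values (the function name is ours):

```python
def relative_position_score(value, values):
    """Min-max normalization: where does `value` sit in the range of `values`?

    Returns a score in [0, 1]; used for both the KM list and the KKM list.
    """
    lo, hi = min(values), max(values)
    if hi == lo:                 # all records identical -> no spread to compare
        return 0.5
    return (value - lo) / (hi - lo)
```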
pH and temperature similarity score
We analyzed the data statistically to build a model for comparing the engineering chassis with each selected enzyme.
We collected every physical and chemical property of the enzymes that come from E. coli or yeast and plotted the statistics as histograms. As the plots (Fig. 4) show, the distributions are approximately normal, so we use this normal-distribution model to score the similarity between an enzyme and the host's environment.
For this statistic, we first combine pHR (pH range) and pHO (pH optimum), and combine TR (temperature range) and TO (temperature optimum), then choose the median as the suitable pH and temperature values for each selected enzyme.
As before, if a pH record exists in the database, it is added to the pH list; if not, we collect the pH values of other enzymes from the same organism and use their median as a substitute. Temperature is handled in the same way.
Fig4. pH & temperature distribution in E.coli & yeast environment
Based on this model, we take the frequency (probability density) at the enzyme's suitable pH or temperature as the similarity score.
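A minimal sketch of this similarity score, assuming the host distributions are fitted as normal distributions with SciPy (the numbers in the usage example are placeholders, not our fitted values):

```python
from scipy.stats import norm

def similarity_score(enzyme_value, host_values):
    """Score how well an enzyme's suitable pH (or temperature) matches the host.

    host_values: pH (or temperature) records of enzymes from the host
                 (E. coli or yeast), fitted with a normal distribution.
    The probability density at the enzyme's value is the similarity score,
    so values near the centre of the host distribution score highest.
    """
    mu, sigma = norm.fit(host_values)         # fit the host distribution
    return norm.pdf(enzyme_value, mu, sigma)  # density = similarity score

# Hypothetical usage with placeholder numbers:
# similarity_score(7.4, [6.5, 7.0, 7.2, 7.5, 7.8, 8.0])
```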
Score formula
Considering the priority of each physical and chemical property, we formulated the following weighted formula as the enzyme score model.
After ranking the selected enzymes by the score obtained from this formula, we get the enzyme selection result for each enzyme.
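As an illustration only, the combination of the individual scores can be sketched as a weighted sum; the equal weights below are hypothetical placeholders and not the priorities we actually assigned:

```python
def enzyme_score(km_score, kkm_score, ph_score, temp_score,
                 weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the individual comparison scores into one enzyme score.

    The weights reflect the priority of each property; the equal weights
    used here are placeholders for illustration only.
    """
    w_km, w_kkm, w_ph, w_temp = weights
    return (w_km * km_score + w_kkm * kkm_score
            + w_ph * ph_score + w_temp * temp_score)

# Rank candidate enzymes by this score (highest first):
# ranked = sorted(candidates, key=lambda e: enzyme_score(*e.scores), reverse=True)
```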
CODON OPTIMIZATION
Relative adaptiveness of a codon (wi)
In the codon optimization section, we want the enzyme sequences obtained from different organisms to reach a stable, or even high, expression level in the target organism, while keeping the original codon environment of the exogenous gene as far as possible. We therefore abandoned the popular method called "codon harmonization" and instead took the approach of random point substitution: we only replace specific codons rather than changing nearly all of them as codon harmonization does. To decide which codons can be replaced, we introduce a parameter that represents the relative adaptiveness of a codon, termed wi. wi is the ratio between the usage frequency of the current codon (fi) and the usage frequency of the most frequent synonymous codon for that amino acid (max(fj)); in this way wi reflects the codon usage bias of the chassis.
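Written as a formula, following the definition above:

\[
w_i = \frac{f_i}{\max_j f_j}
\]

where \(f_j\) ranges over the usage frequencies of the synonymous codons for the same amino acid.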
GC content
The usage frequency of every codon in E. coli and yeast can be obtained from the Kazusa online database [2] or GenBank [3]. Given a usage-frequency table, a threshold can be set to screen for codons with a low wi score, which may negatively affect expression. These filtered-out codons form a candidate codon list; later we randomly choose some of them to be replaced with synonymous codons of higher usage frequency. To decide how many codons should be replaced and to estimate the likely effect on expression level, we introduce GC content and CAI as our parameters. A sequence's GC content is the ratio between the number of G and C bases and the total number of bases. If the GC content of an exogenous gene is similar to the GC content of the target organism's genome, the gene is expected to express better in the target organism.
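A minimal sketch of the GC content calculation (the function name is ours):

```python
def gc_content(sequence):
    """Fraction of G and C bases in a DNA sequence."""
    seq = sequence.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical usage:
# gc_content("ATGGCCGCA")  # -> 0.666...
```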
Codon Adaptation Index (CAI)
The Codon Adaptation Index (CAI) is the most widespread measure for analyzing codon usage bias and predicting the expression level of a gene from its codon sequence [4]. CAI ranges between 0 and 1; the closer the value is to 1, the better the sequence is expected to express in the target organism. CAI is calculated as the geometric mean of the relative adaptiveness (wi) of each codon over the total number of codons (L).
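Written out, following the definition above:

\[
\mathrm{CAI} = \left( \prod_{i=1}^{L} w_i \right)^{1/L}
\]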
With these preparations in place, we perform the codon optimization: first, we generate a list of candidate sequences from the input sequence by replacing codons from the candidate codon list in different random combinations; then we use both GC content and CAI to filter and rank the candidate sequences; finally, we output the optimized sequence with the best GC content or the highest CAI value.
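A minimal sketch of this generate-filter-rank loop, assuming precomputed inputs (the helper names, dictionaries, and the trial count are placeholders, not our actual code):

```python
import random
import math

def codon_cai(codons, w):
    """CAI = geometric mean of the relative adaptiveness w_i of each codon."""
    return math.exp(sum(math.log(w[c]) for c in codons) / len(codons))

def optimize(codons, w, best_synonym, candidates, gc_range, n_trials=1000):
    """Randomly replace low-w codons and keep candidates inside the GC window.

    codons: input sequence as a list of codons.
    w: dict codon -> relative adaptiveness (w_i) in the chassis.
    best_synonym: dict codon -> highest-frequency synonymous codon.
    candidates: positions of low-w codons eligible for replacement.
    gc_range: (low, high) target GC content window, e.g. (0.574, 0.586).
    """
    def gc(cs):
        s = "".join(cs)
        return (s.count("G") + s.count("C")) / len(s)

    results = []
    for _ in range(n_trials):
        new = list(codons)
        # replace a random subset of the candidate positions
        for pos in random.sample(candidates, k=random.randint(0, len(candidates))):
            new[pos] = best_synonym[new[pos]]
        if gc_range[0] <= gc(new) <= gc_range[1]:
            results.append((codon_cai(new, w), new))
    # rank the surviving candidates by CAI, highest first
    results.sort(key=lambda r: r[0], reverse=True)
    return results
```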
Example
Here we show an example. We took the sequences of acetyl-CoA C-acetyltransferase [EC:2.3.1.9] from different donor organisms and optimized them to give them a higher chance of expressing well in E. coli. The average GC content of E. coli is 0.58, so we set the target GC content range to [0.574, 0.586]. The results are shown below.
Fig5. Codon optimization results shown by CAI value (left) & GC content (right)
REFERENCE
[1] Brininger C, Spradlin S, Cobani L, Evilia C. The more adaptive to change, the more likely you are to survive: Protein adaptation in extremophiles. Seminars in Cell & Developmental Biology, 2018, 84: 158-169. https://doi.org/10.1016/j.semcdb.2017.12.016
[2] Kazusa codon usage database: http://www.kazusa.or.jp/codon/
[3] GenBank: https://www.ncbi.nlm.nih.gov/genbank/
[4] Sharp PM, Li WH. The codon adaptation index - a measure of directional synonymous codon usage bias, and its potential applications. Nucleic Acids Research, 1987, 15(3): 1281-1295. doi:10.1093/nar/15.3.1281