Team:Tsinghua-A/Feature Extraction

 
 
{{Tsinghua-A}}

<html>

<head>
<title>Feature Extraction</title>
 
 
<link rel="stylesheet" href="https://2019.igem.org/Template:Tsinghua-A/CSSTest?action=raw&ctype=text/css">

</head>
 
<center style="height: 100%;">
<div class="top_pic" style="height: 100%;margin-top: 0;">
<div style="background-image: url(https://static.igem.org/mediawiki/2019/4/44/T--Tsinghua-A--project-main.jpg);height: 100%;
+
<div style="background-image: url(https://static.igem.org/mediawiki/2019/a/a4/T--Tsinghua-A--MAIN4.png);height: 100%;
 
     width: 100%;background-attachment: fixed;background-size: cover;">
     </div>
 
<div class="row">
 
<div class="row">
 
<div class="right_part">
 
<div class="right_part">
<a name="Feature Extraction"></a>
+
<a name="Overview"></a>
<h3>Overview</h3>
<p>Image similarity search is widely applied both in industry and in our daily life, enabling us to retrieve enormous numbers of images from the Internet in no time. The early image similarity search algorithms used in search engines were usually based on a non-neural-network method known as the Perceptual Hashing Algorithm. It computes a digital fingerprint of each image through steps such as image normalization, color simplification, grayscale comparison and hash value calculation. This hash-based algorithm measures image similarity by comparing the fingerprints of different pictures.</p>
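<p>As an illustration, the following is a minimal average-hash sketch in Python. It shows one simple member of the perceptual-hashing family described above; Pillow and NumPy, and the 8*8 hash size, are our assumptions rather than the exact recipe of any particular search engine.</p>
<pre><code># A minimal average-hash (aHash) sketch: one simple member of the
# perceptual-hashing family described above. Pillow and NumPy, and the
# 8*8 hash size, are illustrative assumptions.
from PIL import Image
import numpy as np

def average_hash(path, hash_size=8):
    # Normalize: shrink to hash_size*hash_size and convert to grayscale.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    # Fingerprint: one bit per pixel, set when brighter than the mean.
    return (pixels > pixels.mean()).flatten()

def hamming_distance(h1, h2):
    # Similar images differ in only a few fingerprint bits.
    return int(np.count_nonzero(h1 != h2))
</code></pre>
<p>Two images are then judged similar when the Hamming distance between their 64-bit fingerprints is small.</p>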
<p>In our project, there is a certain similarity between retrieval from a DNA storage library and data retrieval on a server: both require feature extraction from pictures as a measurement of image similarity. Based on this comparability, two specific problems need to be solved:</p>
<ul>
<li><strong>Image feature extraction:</strong></li>
</ul>
<p>At present, many existing neural networks can effectively handle image feature extraction. In this project, we chose a pre-trained VGG16 network and Principal Component Analysis (PCA) to perform feature extraction on the Caltech-256 database, validated the feasibility of this method, and determined the length of the final retained feature vector.</p>
<ul>
<li><strong>Similarity criteria</strong></li>
</ul>
<p>In feature space, there are multiple ways to measure the similarity between feature vectors, such as Euclidean distance, cosine distance and Hamming distance. Through experiments, we found that cosine distance has the highest accuracy and stability under various experimental conditions (including different feature dimensions and image types), so we chose cosine distance as the final measure of similarity.</p>
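<p>For reference, a minimal NumPy sketch of the cosine distance used as our similarity criterion (the function name and the epsilon guard against zero vectors are our choices):</p>
<pre><code># Cosine distance between two feature vectors: 1 - cos(angle between
# them); lower means more similar.
import numpy as np

def cosine_distance(a, b, eps=1e-12):
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    denom = max(float(np.linalg.norm(a) * np.linalg.norm(b)), eps)
    return 1.0 - float(a @ b) / denom
</code></pre>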
<a name="Algorithm"></a>
<h3>Algorithm structure &amp; principles</h3>
<ul>
<li><strong>VGG16 Network</strong></li>
</ul>
<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/f/f7/T--Tsinghua-A--vgg16.png">
</div>
<p>VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper “Very Deep Convolutional Networks for Large-Scale Image Recognition”. The model achieves 92.7% top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1000 classes.<sup class='md-footnote'><a href='#dfref-footnote-1' name='ref-footnote-1'>1</a></sup></p>
<p>In this project, we utilize VGG16&#39;s &#39;intermediate product&#39;: the output of a ReLU layer, which is a 1*1*4096 vector. We then tested two schemes to further reduce the size of the feature vector: the first adds another neural network layer, similar to the final layers of the original VGG16 shown above but with an adjustable output length; the second performs dimension reduction using PCA.</p>
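<p>A minimal sketch of this extraction step follows. We assume the Keras applications API here (whether the original pipeline used Keras or another framework is not stated on this page); in Keras&#39;s pre-trained VGG16 the &#39;fc2&#39; Dense layer already applies ReLU, so its output is the 4096-dimensional post-ReLU vector described above.</p>
<pre><code># Extract the 4096-dimensional post-ReLU 'fc2' activation of a
# pre-trained VGG16 as the raw image feature. Assumes TensorFlow/Keras.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image

base = VGG16(weights="imagenet")          # pre-trained on ImageNet
extractor = Model(inputs=base.input,
                  outputs=base.get_layer("fc2").output)   # 1*1*4096

def extract_feature(path):
    img = image.load_img(path, target_size=(224, 224))    # VGG16 input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]                        # shape (4096,)
</code></pre>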
<ul>
<li><strong>Dataset preprocessing</strong></li>
</ul>
<p>In our experiments, we used images from the Caltech-256 dataset. To meet the input requirements of the VGG16 network, we preprocessed and screened the images to 224*224*3 size, selected 25 of the 256 categories, and divided them into a training set and a test set.</p>
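<p>A sketch of the screening and splitting step (the one-folder-per-category layout matches the Caltech-256 release, but the 80/20 split ratio and fixed seed are illustrative assumptions; the resize to 224*224 happens in the extraction sketch above via target_size):</p>
<pre><code># Screening sketch for Caltech-256: keep 25 of the 256 categories and
# split each category into training and test images.
import os
import random

def build_splits(root, n_classes=25, test_fraction=0.2, seed=0):
    rng = random.Random(seed)
    classes = sorted(os.listdir(root))[:n_classes]
    train, test = [], []
    for cls in classes:
        folder = os.path.join(root, cls)
        files = [os.path.join(folder, f) for f in sorted(os.listdir(folder))]
        rng.shuffle(files)
        cut = int(len(files) * (1.0 - test_fraction))
        train += [(p, cls) for p in files[:cut]]
        test += [(p, cls) for p in files[cut:]]
    return train, test
</code></pre>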
<ul>
<li><strong>Establishment of Image Feature Library</strong></li>
</ul>
<p>Feature vectors that are effective for similarity search tend to be high-dimensional: the nearest neighbors in these high-dimensional spaces (with respect to Euclidean or cosine distance) appear more similar to the query than the nearest neighbors in lower-dimensional spaces.<sup class='md-footnote'><a href='#dfref-footnote-2' name='ref-footnote-2'>2</a></sup></p>
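<p>To build the library, every database image is run through the extractor once, PCA is fitted on the stacked 4096-d features, and only the reduced vectors are stored. A sketch, reusing extract_feature from above (scikit-learn and the reduced length L=10, taken from the experiments below, are assumptions):</p>
<pre><code># Build the image feature library: stack the 4096-d VGG16 features of
# all database images, fit PCA once, and keep the reduced vectors.
import numpy as np
from sklearn.decomposition import PCA

def build_library(paths, n_components=10):
    feats = np.stack([extract_feature(p) for p in paths])   # (N, 4096)
    pca = PCA(n_components=n_components).fit(feats)         # learn projection
    return pca, pca.transform(feats)                        # (N, L) library
</code></pre>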
<ul>
<li><strong>Query image feature vector calculation</strong></li>
</ul>
<p>For an input query image, we first use the VGG16 network to obtain its high-dimensional feature, then apply the same PCA parameters computed in the library-building step above to reduce the feature to the desired length (a combined sketch of this and the following step appears below).</p>
<ul>
<li><strong>Similarity calculation and searching</strong></li>
</ul>
<p>By calculating the cosine distance between the query image&#39;s feature vector and the feature vectors of the pictures in the database (stored in DNA), and sorting the database images by similarity score, we obtain the search results.</p>
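<p>The last two steps combine into a short query pipeline. A sketch building on the functions above (k, the number of returned results, is an assumption):</p>
<pre><code># Query pipeline: the query image goes through the same VGG16 + PCA
# steps as the library, then the database is ranked by cosine distance.
# Reuses extract_feature and the fitted pca/library from above.
import numpy as np

def search(query_path, pca, library, k=5):
    q = pca.transform(extract_feature(query_path).reshape(1, -1))[0]
    norms = np.linalg.norm(library, axis=1) * np.linalg.norm(q)
    dists = 1.0 - (library @ q) / np.maximum(norms, 1e-12)  # cosine distance
    return np.argsort(dists)[:k]      # indices of the k most similar images
</code></pre>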
<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/d/dd/T--Tsinghua-A--cosine_dist.png">
</div>
<a name="Implementation"></a>
<h3>Implementation</h3>
<p>As mentioned above, we use a pre-trained VGG16 network to perform feature extraction.</p>
<p>In initial experiments, we used a pre-trained network with a single hidden layer to perform feature dimension reduction, in which the number of nodes in the hidden layer is the number of dimensions after reduction. At first, this method achieved good results, but because its training is supervised, it takes some &#39;experience&#39; to organize the image labels, and the algorithm also lacks scalability. We therefore switched to PCA for dimensionality reduction and achieved similar accuracy (see the sketch below).</p>
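<p>A sketch of that first, supervised scheme: the hidden layer&#39;s activation is the reduced feature, and the softmax head exists only to supply a training signal from the class labels. Optimizer, loss and layer sizes are illustrative assumptions.</p>
<pre><code># Supervised one-hidden-layer reduction network on top of the 4096-d
# VGG16 feature. The hidden layer has as many nodes as the target
# dimension; the softmax head only supplies a training signal.
from tensorflow.keras import layers, models

def reduction_net(input_dim=4096, reduced_dim=10, n_classes=25):
    net = models.Sequential([
        layers.Dense(reduced_dim, activation="relu",
                     input_shape=(input_dim,), name="reduced"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    net.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return net
# After training, the activation of the 'reduced' layer is taken as the
# low-dimensional feature; PCA later replaced this supervised step.
</code></pre>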
<p>By using the methods above, we established an image feature database; when a picture is queried in the system, all we have to do is run it through the same pipeline.</p>
<a name="Experiments"></a>
<h3>Experiments</h3>
<p>For the image database described above, we carried out a series of experiments to verify the feasibility of this method. Through these experiments, we also determined the length of the features to be retained and the similarity criterion to be used.</p>
<ul>
<li>Fixed number of query images (N=5), using feature vectors of different lengths (L=1 to 100) to carry out the search:</li>
</ul>

<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/4/4c/T--Tsinghua-A--feat_len17.png">
</div>
<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/9/91/T--Tsinghua-A--feat_len19.png">
</div>
<ul>
<li>Fixed feature vector length (L=10), using Euclidean distance and cosine similarity as criteria to search different numbers of pictures (1 to 30):</li>
</ul>

<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/3/30/T--Tsinghua-A--euc_cos.png">
</div>
<ul>
<li>A clustering method is used to preprocess the feature database and obtain the central feature of each class; the search is then carried out within the clustering results (a minimal sketch follows the figures below):</li>
</ul>

<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/a/a8/T--Tsinghua-A--bear.png">
</div>
<div class="demo" style="text-align: center;">
<img src="https://static.igem.org/mediawiki/2019/7/78/T--Tsinghua-A--cluster.png">
</div>
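<p>A sketch of this clustering preprocess (k-means, k_clusters=25 and k=5 are assumptions; this page does not name the clustering algorithm, and in practice the clustering is fitted once per library rather than per query):</p>
<pre><code># Clustering preprocess sketch: group the reduced library with k-means,
# treat the centroids as 'central features', and search only inside the
# query's nearest cluster.
import numpy as np
from sklearn.cluster import KMeans

def cluster_search(q, library, k_clusters=25, k=5):
    km = KMeans(n_clusters=k_clusters, n_init=10).fit(library)
    label = km.predict(q.reshape(1, -1))[0]        # query's nearest centroid
    members = np.where(km.labels_ == label)[0]     # images in that cluster
    sub = library[members]
    norms = np.linalg.norm(sub, axis=1) * np.linalg.norm(q)
    dists = 1.0 - (sub @ q) / np.maximum(norms, 1e-12)
    return members[np.argsort(dists)[:k]]          # library indices
</code></pre>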
<p>&nbsp;</p>
<p>&nbsp;</p>
  
  
<div class='footnotes-area'><hr/>
<div class='footnote-line'><span class='md-fn-count'>1</span> <a href='https://neurohive.io/en/popular-networks/vgg16/' target='_blank' class='url'>https://neurohive.io/en/popular-networks/vgg16/</a><a name='dfref-footnote-1' href='#ref-footnote-1' title='back to document' class='reversefootnote' >↩</a></div>
<div class='footnote-line'><span class='md-fn-count'>2</span> Kendall Stewart et al., A Content-Addressable DNA Database with Learned Sequence Encodings. <a name='dfref-footnote-2' href='#ref-footnote-2' title='back to document' class='reversefootnote' >↩</a></div></div>
+
 
<div class="left_part" style="
 
<div class="left_part" style="
 
            margin-left: 0em;
 
            margin-left: 0em;
 
<div class="tabs-container tabs--vertical page-navigator font_big" style="position: -webkit-sticky;position: sticky;adding-bottom: 0px;margin-top: 11.4em;">
 
<div class="tabs-container tabs--vertical page-navigator font_big" style="position: -webkit-sticky;position: sticky;adding-bottom: 0px;margin-top: 11.4em;">
 
<h1 style="padding-left: 2%">
 
<h1 style="padding-left: 2%">
Feature Extraction
 
</h1>
<ul class="lateral">
 
<div style="line-height: 100%;">
 
<div style="line-height: 100%;">
  
<a href="#Introduction">Introduction</a>
+
<a href="#Overview">Overview</a>
 
</div>
 
 
</li>
<li class="lateral">
 
<div style="line-height: 100%;">
 
<div style="line-height: 100%;">
<a href="#Source Encoding">Source Encoding</a>
+
<a href="#Algorithm">Algorithm</a>
 
</div>
</li>
<li class="lateral">
<div style="line-height: 100%;">
<a href="#Channel Encoding">Channel Encoding</a>
+
<a href="#Implementation">Implementation</a>
 
</div>
</li>
<li class="lateral">
<div style="line-height: 100%;">
<a href="#Tests"><strong>Tests</strong></a>
+
<a href="#Experiments">Experiments</a>
 
</div>
</li>
