N20-7 Image classifier

To-do

  • Research convolutional layers
  • Research basic layers:
    • Dense
    • Activation
    • Dropout
  • Research pooling layers
  • Research losses
  • Research optimizers
  • Make a nice classifier model out of the layers mentioned above
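The final item can be sketched with Keras. This is a hypothetical assembly of the layer types listed above (Conv2D, pooling, Dense, Activation, Dropout); the layer counts and sizes are illustrative, not the repository's actual model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(224, 224, 3)):
    """Illustrative binary image classifier; sizes are placeholders."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional feature extractor, ReLU after each filter bank
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        # Classifier head: Dense + Dropout, sigmoid output for two classes
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```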

Comments on a model

About activation function

Ours is clearly a classification problem, so the output layer should use something like a sigmoid function. There are two sigmoid-like functions in the TensorFlow library: sigmoid (the genuine sigmoid) and softmax (a sigmoid-like function). The difference is that softmax couples its inputs, so the output probabilities always sum to one, which is useful for multiclass classification. All in all, with only two classes neither is better; the softmax outputs will simply sum to one.
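The coupling can be seen in a minimal NumPy sketch of the two functions (pure illustration of the math, not TensorFlow's implementations):

```python
import numpy as np

def sigmoid(z):
    # Each logit is squashed independently of the others
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Shift by the max for numerical stability, then normalise:
    # every logit competes with the others, so the outputs sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
# softmax(logits).sum() == 1 exactly; sigmoid(logits).sum() need not be 1
```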

The ReLU activation function is considered the standard nowadays, as it is cheap to compute, it doesn't saturate, and it's non-linear. Also, it has been shown that ReLU layers after the filters surprisingly improve image classifiers' performance. ReLU was used in the convolutional layers.
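A quick sketch of what "doesn't saturate" means: ReLU passes any positive input through unchanged, so its gradient stays 1 no matter how large the activation gets, unlike sigmoid or tanh, which flatten out.

```python
import numpy as np

def relu(x):
    # max(0, x) elementwise: negatives are clipped, positives pass through
    return np.maximum(0.0, x)

x = np.array([-3.0, -0.5, 0.0, 2.0, 100.0])
# relu(x) -> [0, 0, 0, 2, 100]; the slope for positive inputs is always 1,
# so gradients don't vanish even for very large activations
```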

About loss function

A cross-entropy loss was chosen. It minimizes -log(likelihood), thus strongly penalising the model when it outputs a very bad result. However, there are several versions of it in the TensorFlow library. The difference is unknown to me, but the binary one was used, as we have only two classes.
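The penalising behaviour is easy to see in a NumPy sketch of binary cross-entropy (an illustration of the formula, not TensorFlow's implementation):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    # -log(likelihood) of the true class, averaged over samples;
    # clipping avoids log(0) for extreme predictions
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))

# A confidently wrong prediction costs far more than a mildly off one:
mild = binary_cross_entropy(np.array([1.0]), np.array([0.9]))   # -log(0.9)
harsh = binary_cross_entropy(np.array([1.0]), np.array([0.01])) # -log(0.01)
```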

About optimizer

Not investigated yet; Adam was used.
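For reference, the core of what Adam does can be sketched in NumPy (the standard update rule with TensorFlow's default hyperparameters; a simplified illustration, not Keras's implementation):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameters theta at step t (t starts at 1)."""
    # Exponential moving averages of the gradient and its square
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # Bias correction compensates for the zero-initialised averages
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Per-parameter step size, scaled by the gradient's running magnitude
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias-corrected ratio is close to the gradient's sign, so each parameter moves by roughly the learning rate regardless of the gradient's scale.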

Created models

| Model name | Loss | Accuracy | Pic res | Params |
|------------|------|----------|---------|--------|
| model_2_1  | 1.02 | 0.837    | 224x224 | 555k   |

Used to understand the topic
