<!DOCTYPE html>
<html lang="zh-CN">
<head>
    <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, minimum-scale=1.0, maximum-scale=1.0" />
    <title>CNN</title>
    <meta name="author" content="[email protected]" />
    <meta name="copyright" content="SMRUCC genomics Copyright (c) 2022" />
    <meta name="keywords" content="R#; CNN; MLkit" />
    <meta name="generator" content="https://github.com/rsharp-lang" />
    <meta name="theme-color" content="#333" />
    <meta name="description" content="feed-forward phase of deep Convolutional Neural Networks... " />
    <meta class="foundation-data-attribute-namespace" />
    <meta class="foundation-mq-xxlarge" />
    <meta class="foundation-mq-xlarge" />
    <meta class="foundation-mq-large" />
    <meta class="foundation-mq-medium" />
    <meta class="foundation-mq-small" />
    <meta class="foundation-mq-topbar" />
    <style>
        .table-three-line {
            border-collapse: collapse;  /* key property: collapse the table's inner and outer borders (the table border is actually 2px: 1px outside plus 1px inside) */
            border: solid #000000;      /* border style (solid) and color */
            border-width: 2px 0 2px 0px; /* border widths: top right bottom left */
        }
        .left-1 {
            border: solid #000000; border-width: 1px 1px 2px 0px; padding: 2px;
            font-weight: bolder;
        }
        .right-1 {
            border: solid #000000; border-width: 1px 0px 2px 1px; padding: 2px;
            font-weight: bolder;
        }
        .mid-1 {
            border: solid #000000; border-width: 1px 1px 2px 1px; padding: 2px;
            font-weight: bolder;
        }
        .left {
            border: solid #000000; border-width: 1px 1px 1px 0px; padding: 2px;
        }
        .right {
            border: solid #000000; border-width: 1px 0px 1px 1px; padding: 2px;
        }
        .mid {
            border: solid #000000; border-width: 1px 1px 1px 1px; padding: 2px;
        }
        table caption { font-size: 14px; font-weight: bolder; }
    </style>
</head>
<body>
    <table width="100%" summary="page for {CNN}">
        <tbody>
            <tr>
                <td>{CNN}</td>
                <td style="text-align: right;">R# Documentation</td>
            </tr>
        </tbody>
    </table>
    <h1>CNN</h1>
    <hr />
    <p style="font-size: 1.125em; line-height: .8em; margin-left: 0.5%; background-color: #fbfbfb; padding: 24px;">
        <code>
            <span style="color: blue;">require</span>(<span style="color: black; font-weight: bold;">R</span>);
            <br /><br />
            <span style="color: green;">#' feed-forward phase of deep Convolutional Neural Networks</span><br />
            <span style="color: blue;">imports</span> <span style="color: brown;">"CNN"</span> <span style="color: blue;">from</span> <span style="color: brown;">"MLkit"</span>;
        </code>
    </p>
    <p>feed-forward phase of deep Convolutional Neural Networks</p>
    <blockquote>
        <p style="font-style: italic; font-size: 0.9em;">
            feed-forward phase of deep Convolutional Neural Networks
        </p>
    </blockquote>
    <div id="main-wrapper">
        <table class="table-three-line">
            <tbody>
                <tr>
                    <td id="n_threads">
                        <a href="./CNN/n_threads.html">n_threads</a>
                    </td>
                    <td><p>get/set of the CNN parallel thread number</p></td>
                </tr>
                <tr>
                    <td id="cnn">
                        <a href="./CNN/cnn.html">cnn</a>
                    </td>
                    <td><p>Create a new CNN model</p>
                    <p>A convolutional neural network (CNN) is a regularized type of feed-forward
                    neural network that learns feature engineering by itself via filter
                    (or kernel) optimization. Vanishing and exploding gradients, seen during
                    backpropagation in earlier neural networks, are prevented by using
                    regularized weights over fewer connections.</p></td>
                </tr>
                <tr>
                    <td id="input_layer">
                        <a href="./CNN/input_layer.html">input_layer</a>
                    </td>
                    <td><p>The input layer is a simple layer that passes the data through and
                    creates a window into the full training data set. For instance, if we have
                    an image of size 28x28x1, meaning 28 pixels on the x axis, 28 pixels on the
                    y axis and one color channel (gray scale), then this layer might give you a
                    window of another size, for example 24x24x1, chosen at random in order to
                    introduce some distortion into the dataset so that the algorithm doesn't
                    over-fit the training data.</p></td>
                </tr>
                <tr>
                    <td id="regression_layer">
                        <a href="./CNN/regression_layer.html">regression_layer</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="conv_layer">
                        <a href="./CNN/conv_layer.html">conv_layer</a>
                    </td>
                    <td><p>This layer uses different filters to find attributes of the data that
                    affect the result. As an example, there could be a filter to find horizontal
                    edges in an image.</p></td>
                </tr>
                <tr>
                    <td id="conv_transpose_layer">
                        <a href="./CNN/conv_transpose_layer.html">conv_transpose_layer</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="lrn_layer">
                        <a href="./CNN/lrn_layer.html">lrn_layer</a>
                    </td>
                    <td><p>This layer is useful when we are dealing with ReLU neurons. Why is that?
                    Because ReLU neurons have unbounded activations and we need LRN to normalize
                    that. We want to detect high frequency features with a large response. If we
                    normalize around the local neighborhood of the excited neuron, it becomes even
                    more sensitive as compared to its neighbors.</p>
                    <p>At the same time, it will dampen the responses that are uniformly large in any
                    given local neighborhood. If all the values are large, then normalizing those
                    values will diminish all of them. So basically we want to encourage some kind
                    of inhibition and boost the neurons with relatively larger activations. This
                    has been discussed nicely in Section 3.3 of the original paper by Krizhevsky et al.</p></td>
                </tr>
                <tr>
                    <td id="tanh_layer">
                        <a href="./CNN/tanh_layer.html">tanh_layer</a>
                    </td>
                    <td><p>Implements the tanh nonlinearity elementwise, mapping x to tanh(x),
                    so the output is between -1 and 1.</p></td>
                </tr>
                <tr>
                    <td id="softmax_layer">
                        <a href="./CNN/softmax_layer.html">softmax_layer</a>
                    </td>
                    <td><p>[*loss_layers] This layer squashes the result of the activations in the
                    fully connected layer, giving values between 0 and 1 for all output
                    activations.</p></td>
                </tr>
                <tr>
                    <td id="relu_layer">
                        <a href="./CNN/relu_layer.html">relu_layer</a>
                    </td>
                    <td><p>This is a layer of neurons that applies the non-saturating activation
                    function f(x) = max(0, x). It increases the nonlinear properties of the
                    decision function and of the overall network without affecting the
                    receptive fields of the convolution layer.</p></td>
                </tr>
                <tr>
                    <td id="leaky_relu_layer">
                        <a href="./CNN/leaky_relu_layer.html">leaky_relu_layer</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="maxout_layer">
                        <a href="./CNN/maxout_layer.html">maxout_layer</a>
                    </td>
                    <td><p>Implements the maxout nonlinearity, which maps a vector x of size
                    group_size to max(x). Ideally, of course, the input size should be exactly
                    divisible by group_size.</p></td>
                </tr>
                <tr>
                    <td id="sigmoid_layer">
                        <a href="./CNN/sigmoid_layer.html">sigmoid_layer</a>
                    </td>
                    <td><p>Implements the sigmoid nonlinearity elementwise, mapping x to
                    1/(1+e^(-x)), so the output is between 0 and 1.</p></td>
                </tr>
                <tr>
                    <td id="pool_layer">
                        <a href="./CNN/pool_layer.html">pool_layer</a>
                    </td>
                    <td><p>This layer reduces the dataset by creating a smaller, zoomed-out
                    version. In essence, you take a cluster of pixels, take the sum of them,
                    and put the result in the reduced position of the new image.</p></td>
                </tr>
                <tr>
                    <td id="dropout_layer">
                        <a href="./CNN/dropout_layer.html">dropout_layer</a>
                    </td>
                    <td><p>This layer will remove some random activations in order to
                    defeat over-fitting.</p></td>
                </tr>
                <tr>
                    <td id="full_connected_layer">
                        <a href="./CNN/full_connected_layer.html">full_connected_layer</a>
                    </td>
                    <td><p>Neurons in a fully connected layer have full connections to all
                    activations in the previous layer, as seen in regular neural networks.
                    Their activations can hence be computed with a matrix multiplication
                    followed by a bias offset.</p></td>
                </tr>
                <tr>
                    <td id="gaussian_layer">
                        <a href="./CNN/gaussian_layer.html">gaussian_layer</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="sample_dataset">
                        <a href="./CNN/sample_dataset.html">sample_dataset</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="sample_dataset.image">
                        <a href="./CNN/sample_dataset.image.html">sample_dataset.image</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="auto_encoder">
                        <a href="./CNN/auto_encoder.html">auto_encoder</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="training">
                        <a href="./CNN/training.html">training</a>
                    </td>
                    <td><p>Do CNN network model training</p></td>
                </tr>
                <tr>
                    <td id="ada_delta">
                        <a href="./CNN/ada_delta.html">ada_delta</a>
                    </td>
                    <td><p>Adaptive delta will look at the differences between the expected result
                    and the current result to train the network.</p></td>
                </tr>
                <tr>
                    <td id="ada_grad">
                        <a href="./CNN/ada_grad.html">ada_grad</a>
                    </td>
                    <td><p>The adaptive gradient trainer will over time sum up the square of
                    the gradient and use it to change the weights.</p></td>
                </tr>
                <tr>
                    <td id="adam">
                        <a href="./CNN/adam.html">adam</a>
                    </td>
                    <td><p>Adaptive Moment Estimation (Adam) is an update to the RMSProp optimizer
                    in which running averages of both the gradients and their magnitudes are
                    used.</p></td>
                </tr>
                <tr>
                    <td id="nesterov">
                        <a href="./CNN/nesterov.html">nesterov</a>
                    </td>
                    <td><p>Another extension of gradient descent, due to Yurii Nesterov (1983),
                    which has subsequently been generalized.</p></td>
                </tr>
                <tr>
                    <td id="sgd">
                        <a href="./CNN/sgd.html">sgd</a>
                    </td>
                    <td><p>Stochastic gradient descent (often shortened to SGD), also known as
                    incremental gradient descent, is a stochastic approximation of the gradient
                    descent optimization method for minimizing an objective function that is
                    written as a sum of differentiable functions. In other words, SGD tries to
                    find minima or maxima by iteration.</p></td>
                </tr>
                <tr>
                    <td id="window_grad">
                        <a href="./CNN/window_grad.html">window_grad</a>
                    </td>
                    <td><p>This is AdaGrad but with a moving-window weighted average, so the
                    gradient is not accumulated over the entire history of the run. It is also
                    referred to as Idea #1 in Zeiler's paper on AdaDelta.</p></td>
                </tr>
                <tr>
                    <td id="predict">
                        <a href="./CNN/predict.html">predict</a>
                    </td>
                    <td></td>
                </tr>
                <tr>
                    <td id="CeNiN">
                        <a href="./CNN/CeNiN.html">CeNiN</a>
                    </td>
                    <td><p>load a CNN model from file</p></td>
                </tr>
                <tr>
                    <td id="detectObject">
                        <a href="./CNN/detectObject.html">detectObject</a>
                    </td>
                    <td><p>classify an object from given image data</p></td>
                </tr>
                <tr>
                    <td id="saveModel">
                        <a href="./CNN/saveModel.html">saveModel</a>
                    </td>
                    <td><p>save the CNN model into a binary data file</p></td>
                </tr>
            </tbody>
        </table>
    </div>
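    <p>
        The R# sketch below is a hypothetical illustration of how the functions indexed above
        might be combined into a small workflow. Only the function names come from this page;
        the argument names, the way layers are attached to the model, and the training and
        prediction call signatures are assumptions rather than documented behaviour.
    </p>
    <pre style="background-color: #fbfbfb; padding: 24px;"><code>
# A minimal, hypothetical sketch; argument names and layer composition are assumptions.
require(R);
imports "CNN" from "MLkit";

model = cnn();               # create a new CNN model
input_layer(model);          # window into the full training data set
conv_layer(model);           # convolution filters for feature extraction
relu_layer(model);           # f(x) = max(0, x) nonlinearity
pool_layer(model);           # down-sample the feature maps
full_connected_layer(model); # dense layer before the output
softmax_layer(model);        # squash the outputs into the range 0..1

# hypothetical: 'dataset' stands for a sample data set, e.g. from sample_dataset()
model  = training(model, dataset, optimizer = adam());
result = predict(model, dataset);

saveModel(model, file = "./cnn_model.dat");
    </code></pre>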
    <hr />
    <div style="text-align: center;">[<a href="../index.html">Document Index</a>]</div>
</body>
</html>