Commit 5e393ab

Day42 update
committed
1 parent 32e714c commit 5e393ab

File tree

4 files changed: +289 -0 lines changed

Code/Day 42.ipynb

+288
@@ -0,0 +1,288 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is part 4 of the deep learning basics with Python, TensorFlow and Keras tutorial series.\n",
    "\n",
    "In this part, we will discuss TensorBoard. TensorBoard is a handy application that lets you view aspects of your model, or models, in your browser. The way we use TensorBoard with Keras is via a Keras callback. There are actually quite a few Keras callbacks, and you can make your own."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "from keras.callbacks import TensorBoard"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the TensorBoard callback object\n",
    "NAME = \"Cats-vs-dogs-CNN\"\n",
    "\n",
    "tensorboard = TensorBoard(log_dir=\"logs/{}\".format(NAME))"
   ]
  },
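  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One thing worth knowing before moving on (a sketch assumed here, not part of the original code): if you reuse the same NAME across runs, every run writes into the same logs/NAME directory and TensorBoard will overlay them as a single messy curve. A common way to keep runs separate is to append a timestamp to NAME:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (assumption, not from the original notebook):\n",
    "# append the current Unix time so each run gets its own log directory.\n",
    "import time\n",
    "\n",
    "NAME = \"Cats-vs-dogs-CNN-{}\".format(int(time.time()))\n",
    "tensorboard = TensorBoard(log_dir=\"logs/{}\".format(NAME))"
   ]
  },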
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Eventually, you will want a more custom NAME (the timestamped variant sketched above is one option), but this will do for now. This will save the model's training data to logs/NAME, which can then be read by TensorBoard.\n",
    "\n",
    "Finally, we can add this callback to our model by passing it to the .fit method, for example:\n",
    "```python\n",
    "model.fit(X, y,\n",
    "          batch_size=32,\n",
    "          epochs=3,\n",
    "          validation_split=0.3,\n",
    "          callbacks=[tensorboard])\n",
    "```\n",
    "Note that callbacks is a list. You can pass other callbacks into this list as well. Our model isn't defined yet, so let's put it all together now:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 17462 samples, validate on 7484 samples\n",
      "Epoch 1/3\n",
      "17462/17462 [==============================] - 44s 3ms/step - loss: 0.6992 - acc: 0.5480 - val_loss: 0.6900 - val_acc: 0.5274\n",
      "Epoch 2/3\n",
      "17462/17462 [==============================] - 41s 2ms/step - loss: 0.6754 - acc: 0.5782 - val_loss: 0.6685 - val_acc: 0.5885\n",
      "Epoch 3/3\n",
      "17462/17462 [==============================] - 41s 2ms/step - loss: 0.6377 - acc: 0.6483 - val_loss: 0.6217 - val_acc: 0.6625\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7ff86d691c18>"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow.keras.datasets import cifar10\n",
    "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
    "from tensorflow.keras.models import Sequential\n",
    "from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten\n",
    "from tensorflow.keras.layers import Conv2D, MaxPooling2D\n",
    "from tensorflow.keras.callbacks import TensorBoard\n",
    "# more info on callbacks: https://keras.io/callbacks/ -- the model saver is cool too.\n",
    "import pickle\n",
    "import time\n",
    "\n",
    "NAME = \"Cats-vs-dogs-CNN\"\n",
    "\n",
    "pickle_in = open(\"../datasets/X.pickle\",\"rb\")\n",
    "X = pickle.load(pickle_in)\n",
    "\n",
    "pickle_in = open(\"../datasets/y.pickle\",\"rb\")\n",
    "y = pickle.load(pickle_in)\n",
    "\n",
    "X = X/255.0\n",
    "\n",
    "model = Sequential()\n",
    "\n",
    "model.add(Conv2D(256, (3, 3), input_shape=X.shape[1:]))\n",
    "model.add(Activation('relu'))\n",
    "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
    "\n",
    "model.add(Conv2D(256, (3, 3)))\n",
    "model.add(Activation('relu'))\n",
    "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
    "\n",
    "model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors\n",
    "model.add(Dense(64))\n",
    "\n",
    "model.add(Dense(1))\n",
    "model.add(Activation('sigmoid'))\n",
    "\n",
    "tensorboard = TensorBoard(log_dir=\"logs/{}\".format(NAME))\n",
    "\n",
    "model.compile(loss='binary_crossentropy',\n",
    "              optimizer='adam',\n",
    "              metrics=['accuracy'],\n",
    "              )\n",
    "\n",
    "model.fit(X, y,\n",
    "          batch_size=32,\n",
    "          epochs=3,\n",
    "          validation_split=0.3,\n",
    "          callbacks=[tensorboard])"
   ]
  },
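  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (not in the original tutorial), Keras models expose a summary() method that prints each layer's output shape and parameter count; with 256 filters per conv layer, the parameter count here is fairly large for this task:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (assumed, not part of the original notebook):\n",
    "# print layer output shapes and parameter counts for the model built above.\n",
    "model.summary()"
   ]
  },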
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After running this, you should have a new directory called logs. We can now use TensorBoard to visualize the initial results from this directory. Open a console, change to your working directory, and type: tensorboard --logdir=logs/. You should see a notice such as: TensorBoard 1.10.0 at http://H-PC:6006 (Press CTRL+C to quit), where \"H-PC\" is your machine's name. Open a browser and go to that address. You should see something like:\n",
    "<img src=\"https://pythonprogramming.net/static/images/machine-learning/tensorboard-basic.png\">\n",
    "Now we can see how our model did over time. Let's change some things in the model. To begin with, we never added an activation to the dense layer. Also, let's try a smaller model overall:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 17462 samples, validate on 7484 samples\n",
      "Epoch 1/10\n",
      "17462/17462 [==============================] - 11s 604us/step - loss: 0.6033 - acc: 0.6652 - val_loss: 0.5298 - val_acc: 0.7320\n",
      "Epoch 2/10\n",
      "17462/17462 [==============================] - 11s 646us/step - loss: 0.4859 - acc: 0.7659 - val_loss: 0.4723 - val_acc: 0.7763\n",
      "Epoch 3/10\n",
      "17462/17462 [==============================] - 11s 641us/step - loss: 0.4270 - acc: 0.8045 - val_loss: 0.4603 - val_acc: 0.7803\n",
      "Epoch 4/10\n",
      "17462/17462 [==============================] - 12s 699us/step - loss: 0.3675 - acc: 0.8347 - val_loss: 0.4476 - val_acc: 0.7929\n",
      "Epoch 5/10\n",
      "17462/17462 [==============================] - 12s 707us/step - loss: 0.3012 - acc: 0.8694 - val_loss: 0.4854 - val_acc: 0.7797\n",
      "Epoch 6/10\n",
      "17462/17462 [==============================] - 12s 705us/step - loss: 0.2165 - acc: 0.9118 - val_loss: 0.5450 - val_acc: 0.7865\n",
      "Epoch 7/10\n",
      "17462/17462 [==============================] - 12s 712us/step - loss: 0.1332 - acc: 0.9510 - val_loss: 0.6512 - val_acc: 0.7821\n",
      "Epoch 8/10\n",
      "17462/17462 [==============================] - 12s 705us/step - loss: 0.0764 - acc: 0.9743 - val_loss: 0.7487 - val_acc: 0.7809\n",
      "Epoch 9/10\n",
      "17462/17462 [==============================] - 12s 713us/step - loss: 0.0389 - acc: 0.9887 - val_loss: 0.9041 - val_acc: 0.7743\n",
      "Epoch 10/10\n",
      "17462/17462 [==============================] - 12s 708us/step - loss: 0.0287 - acc: 0.9921 - val_loss: 1.0411 - val_acc: 0.7702\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7ff86073ec50>"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from tensorflow.keras.models import Sequential\n",
    "from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten\n",
    "from tensorflow.keras.layers import Conv2D, MaxPooling2D\n",
    "from tensorflow.keras.callbacks import TensorBoard\n",
    "# more info on callbacks: https://keras.io/callbacks/ -- the model saver is cool too.\n",
    "import pickle\n",
    "import time\n",
    "\n",
    "NAME = \"Cats-vs-dogs-64x2-CNN\"\n",
    "\n",
    "pickle_in = open(\"../datasets/X.pickle\",\"rb\")\n",
    "X = pickle.load(pickle_in)\n",
    "\n",
    "pickle_in = open(\"../datasets/y.pickle\",\"rb\")\n",
    "y = pickle.load(pickle_in)\n",
    "\n",
    "X = X/255.0\n",
    "\n",
    "model = Sequential()\n",
    "\n",
    "model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:]))\n",
    "model.add(Activation('relu'))\n",
    "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
    "\n",
    "model.add(Conv2D(64, (3, 3)))\n",
    "model.add(Activation('relu'))\n",
    "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
    "\n",
    "model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors\n",
    "model.add(Dense(64))\n",
    "model.add(Activation('relu'))\n",
    "\n",
    "model.add(Dense(1))\n",
    "model.add(Activation('sigmoid'))\n",
    "\n",
    "tensorboard = TensorBoard(log_dir=\"logs/{}\".format(NAME))\n",
    "\n",
    "model.compile(loss='binary_crossentropy',\n",
    "              optimizer='adam',\n",
    "              metrics=['accuracy'],\n",
    "              )\n",
    "\n",
    "model.fit(X, y,\n",
    "          batch_size=32,\n",
    "          epochs=10,\n",
    "          validation_split=0.3,\n",
    "          callbacks=[tensorboard])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Also of note, I changed the name to NAME = \"Cats-vs-dogs-64x2-CNN\". Don't forget to do this, or you'll accidentally append to your previous model's logs, and it won't look good. Let's check TensorBoard now:\n",
    "<img src=\"https://pythonprogramming.net/static/images/machine-learning/second-model-tensorboard.png\">\n",
    "Looking better! However, you will probably immediately notice the shape of the validation loss. Loss is the measure of error, and it clearly looks like things began to sour after our fourth epoch.\n",
    "\n",
    "Interestingly, our validation accuracy still held up, but I imagine it would eventually begin to fall. It's much more likely that the first thing to suffer is indeed your validation loss. This should alert you that you are almost certainly beginning to overfit. The reason this happens is that the model is constantly trying to decrease the in-sample loss.\n",
    "\n",
    "At some point, rather than learning general things about the actual data, the model begins to just memorize the input data. If you let this continue, yes, the in-sample \"accuracy\" will keep rising, but performance out of sample, on any new data you try to feed the model, will likely be poor."
   ]
  },
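  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the validation loss turns upward around epoch 4, one standard remedy (sketched here as an assumption, not part of the original tutorial) is the EarlyStopping callback that Keras provides: it watches a metric such as val_loss and halts training once it stops improving. It goes in the same callbacks list as the TensorBoard callback:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (assumption, not from the original notebook):\n",
    "# stop training once val_loss has failed to improve for `patience` epochs.\n",
    "from tensorflow.keras.callbacks import EarlyStopping\n",
    "\n",
    "early_stop = EarlyStopping(monitor='val_loss',  # metric to watch\n",
    "                           patience=2)          # epochs of no improvement to tolerate\n",
    "\n",
    "# Usage: pass it alongside the TensorBoard callback, e.g.\n",
    "# model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3,\n",
    "#           callbacks=[tensorboard, early_stop])"
   ]
  },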
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}

Info-graphs/Day 42-1.png (43 KB)

Info-graphs/Day 42-2.png (53.5 KB)

README.md

+1
@@ -193,6 +193,7 @@ The Bilibili videos are [here](https://space.bilibili.com/88461692/#/channel/detail?cid=2
 
 ## Part 4 | Deep Learning Basics with Python, TensorFlow and Keras | Day 42
 The video is [here](https://www.youtube.com/watch?v=wQ8BIBpya2k&t=19s&index=2&list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN)
+<br>Chinese text version: [notebook](https://github.com/MachineLearning100/100-Days-Of-ML-Code/blob/master/Code/Day%2042.ipynb)
 
 ## K-Means Clustering | Day 43
 Moved on to unsupervised learning and studied clustering. Available on the [author's website](http://www.avikjain.me/). Found a wonderful [animation](http://shabal.in/visuals/kmeans/6.html) that helps in understanding K-means clustering.
