vault backup: 2023-07-07 10:04:26

2023-07-07 10:04:26 +08:00
parent be4051ecc3
commit 027c80289b
2 changed files with 18 additions and 13 deletions

View File

@@ -39,14 +39,11 @@
"id": "60d345dd5ee6c642",
"type": "leaf",
"state": {
"type": "canvas",
"type": "markdown",
"state": {
"file": "00. Inbox/My Mindmap.canvas",
"viewState": {
"x": 427.9369240160196,
"y": -1566.930978851665,
"zoom": -0.45297146764304286
}
"file": "05. 資料收集/Keras - Dataset.md",
"mode": "source",
"source": true
}
}
}
@@ -117,7 +114,7 @@
"state": {
"type": "backlink",
"state": {
"file": "00. Inbox/My Mindmap.canvas",
"file": "05. 資料收集/Keras - Dataset.md",
"collapseAll": false,
"extraContext": false,
"sortOrder": "alphabetical",
@@ -142,7 +139,7 @@
"state": {
"type": "outline",
"state": {
"file": "00. Inbox/My Mindmap.canvas"
"file": "05. 資料收集/Keras - Dataset.md"
}
}
},
@@ -195,10 +192,13 @@
},
"active": "60d345dd5ee6c642",
"lastOpenFiles": [
"05. 資料收集/興趣嗜好/Fuji X-T5.md",
"05. 資料收集/Keras - Dataset.md",
"04. Programming/OpenCV.md",
"00. Inbox/My Mindmap.canvas",
"00. Inbox/01. TODO.md",
"01. 個人/01. Daily/2023-05-12(週五).md",
"01. 個人/01. Daily/2023-05-11(週四).md",
"00. Inbox/My Mindmap.canvas",
"00. Inbox/想吃的餐廳.md",
"00. Inbox/景點收集.md",
"05. 資料收集/架站/Storj.md",
@@ -206,7 +206,6 @@
"04. Programming/QT/QTableWidget.md",
"00. Inbox/Habit Tracker.md",
"01. 個人/01. Daily/2018/2018-10-12(週五).md",
"04. Programming/OpenCV.md",
"04. Programming/categorical_crossentropy.md",
"04. Programming/Python/argparse.ArgumentParser.md",
"05. 資料收集/皮質醇.md",
@@ -224,8 +223,6 @@
"05. 資料收集/稼動率.md",
"05. 資料收集/libsndfile.md",
"05. 資料收集/名言佳句.md",
"05. 資料收集/讀書筆記/20220619 - 精確的力量.md",
"05. 資料收集/讀書筆記/20220526 - 深入淺出設計模式.md",
"attachments/Pasted image 20230504183452.png",
"attachments/Pasted image 20230504183439.png",
"00. Inbox/想要的鏡頭/變焦",

View File

@@ -0,0 +1,8 @@
+You can use `tensorflow.keras.utils.image_dataset_from_directory` to build a dataset.
+Iterating over the dataset yields `data_batch` and `label_batch`, which hold the data and the labels respectively.
+You can call `dataset.batch(32)` to set the batch size.
+Some other useful functions (a combined usage sketch follows the list):
+- `shuffle(buffer_size)`: shuffles the order of the data
+- `prefetch(buffer_size)`: sets how much data to read ahead
+- `map(callback_func)`: applies `callback_func` to process each element
+- `take(N)`: takes the first N batches (note that each element here is one whole batch, which may hold 32 samples or some other number, depending on how you set `dataset.batch(N)`)
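The note lists these calls individually; below is a minimal sketch of how they can fit together. The `data/train` directory, the 180×180 image size, and the `normalize` helper are illustrative assumptions, not part of the note.

```python
import tensorflow as tf

# Hypothetical layout: one sub-folder per class, e.g.
#   data/train/cats/*.jpg, data/train/dogs/*.jpg
dataset = tf.keras.utils.image_dataset_from_directory(
    "data/train",           # assumed path, replace with your own
    image_size=(180, 180),  # every image is resized to this size
    batch_size=None,        # yield single samples; batching is done manually below
)

def normalize(image, label):
    # callback for map(): scale pixel values from [0, 255] to [0, 1]
    return tf.cast(image, tf.float32) / 255.0, label

dataset = (
    dataset
    .map(normalize)              # process every element with the callback
    .shuffle(buffer_size=1000)   # shuffle within a 1000-sample buffer
    .batch(32)                   # group samples into batches of 32
    .prefetch(tf.data.AUTOTUNE)  # read ahead while the model is training
)

# take(2) keeps only the first two batches of the dataset
for data_batch, label_batch in dataset.take(2):
    print(data_batch.shape, label_batch.shape)  # e.g. (32, 180, 180, 3) (32,)
```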