diff --git a/de-DE/images/8-background.png b/de-DE/images/8-background.png new file mode 100644 index 0000000..d27ef20 Binary files /dev/null and b/de-DE/images/8-background.png differ diff --git a/de-DE/images/allow-microphone.png b/de-DE/images/allow-microphone.png new file mode 100644 index 0000000..ea43466 Binary files /dev/null and b/de-DE/images/allow-microphone.png differ diff --git a/de-DE/images/banner.png b/de-DE/images/banner.png new file mode 100644 index 0000000..0a26f21 Binary files /dev/null and b/de-DE/images/banner.png differ diff --git a/de-DE/images/create-project.png b/de-DE/images/create-project.png new file mode 100644 index 0000000..1a48e1c Binary files /dev/null and b/de-DE/images/create-project.png differ diff --git a/de-DE/images/finished-code.png b/de-DE/images/finished-code.png new file mode 100644 index 0000000..502be93 Binary files /dev/null and b/de-DE/images/finished-code.png differ diff --git a/de-DE/images/new-blocks.png b/de-DE/images/new-blocks.png new file mode 100644 index 0000000..593366e Binary files /dev/null and b/de-DE/images/new-blocks.png differ diff --git a/de-DE/images/project-train.png b/de-DE/images/project-train.png new file mode 100644 index 0000000..660489d Binary files /dev/null and b/de-DE/images/project-train.png differ diff --git a/de-DE/images/projects-list.png b/de-DE/images/projects-list.png new file mode 100644 index 0000000..6938954 Binary files /dev/null and b/de-DE/images/projects-list.png differ diff --git a/de-DE/images/record-button.png b/de-DE/images/record-button.png new file mode 100644 index 0000000..4db43f0 Binary files /dev/null and b/de-DE/images/record-button.png differ diff --git a/de-DE/images/score-hint.png b/de-DE/images/score-hint.png new file mode 100644 index 0000000..361b149 Binary files /dev/null and b/de-DE/images/score-hint.png differ diff --git a/de-DE/images/start-listening.png b/de-DE/images/start-listening.png new file mode 100644 index 0000000..517dc83 Binary files /dev/null and b/de-DE/images/start-listening.png differ diff --git a/de-DE/images/starter-code.png b/de-DE/images/starter-code.png new file mode 100644 index 0000000..b9ff8c1 Binary files /dev/null and b/de-DE/images/starter-code.png differ diff --git a/de-DE/images/test-your-model.png b/de-DE/images/test-your-model.png new file mode 100644 index 0000000..84c6652 Binary files /dev/null and b/de-DE/images/test-your-model.png differ diff --git a/de-DE/images/train-new-model.png b/de-DE/images/train-new-model.png new file mode 100644 index 0000000..62ce33f Binary files /dev/null and b/de-DE/images/train-new-model.png differ diff --git a/de-DE/images/whatyouwillmake.png b/de-DE/images/whatyouwillmake.png new file mode 100644 index 0000000..b11ec59 Binary files /dev/null and b/de-DE/images/whatyouwillmake.png differ diff --git a/de-DE/meta.yml b/de-DE/meta.yml new file mode 100644 index 0000000..b8ead11 --- /dev/null +++ b/de-DE/meta.yml @@ -0,0 +1,22 @@ +title: Fischfutter +hero_image: images/banner.png +description: Steuere einen Fisch mit deiner Stimme allein und leite ihn zum Futter +version: 1 +listed: true +copyedit: false +last_tested: "2024-06-04" +steps: + - title: Das wirst du machen + - title: Projekt einrichten + completion: + - engaged + - title: Hintergrundgeräusche + - title: Nimm the Anweisungen auf + - title: Modell trainieren + - title: Bewege den Fisch + completion: + - internal + - title: Herausforderung + challenge: true + completion: + - extern diff --git a/de-DE/resources/NEW Fish 4.srt b/de-DE/resources/NEW Fish 4.srt new 
file mode 100644 index 0000000..569b900 --- /dev/null +++ b/de-DE/resources/NEW Fish 4.srt @@ -0,0 +1,24 @@ +1 +00:00:04,280 --> 00:00:09,120 +Klicke auf 'Zurück zum Projekt', +dann auf 'Lernen & Testen'. + +2 +00:00:09,120 --> 00:00:16,520 +Trainiere dein neues maschinelles Lernmodell +- es kann einige Minuten dauern. + +3 +00:00:16,520 --> 00:00:19,720 +Klicke auf 'Starte Zuhören' und dann sag 'links'. + +4 +00:00:19,720 --> 00:00:23,360 +Teste, ob das Modell +erkennt, wenn du "links" sagst. + +5 +00:00:23,360 --> 00:00:29,960 +Überprüfe auch, ob das Modell +'rechts', 'hoch' und 'runter' erkennt. + diff --git a/de-DE/resources/NEW Fish 5.srt b/de-DE/resources/NEW Fish 5.srt new file mode 100644 index 0000000..6cc1b68 --- /dev/null +++ b/de-DE/resources/NEW Fish 5.srt @@ -0,0 +1,64 @@ +1 +00:00:05,480 --> 00:00:10,840 +Klicke auf 'Zurück zum Projekt', dann auf 'Erstellen'. + +2 +00:00:10,840 --> 00:00:15,400 +Du wirst das Modell in Scratch 3 verwenden. + +3 +00:00:15,400 --> 00:00:24,280 +Gehe zu 'Projektvorlagen' und +finde die Vorlage für Fischfutter. + +4 +00:00:24,280 --> 00:00:26,560 +Ein bisschen Code wurde bereits hinzugefügt. + +5 +00:00:26,560 --> 00:00:33,000 +Öffne das Menü für bestimmte Machine Learning for +Kids Blöcke, und ziehe einen "wenn ich hoch höre" Baustein. + +6 +00:00:33,000 --> 00:00:40,960 +Füge Code hinzu, damit sich der Fisch nach oben bewegt, wenn du +das Wort 'hoch' sagst. + +7 +00:00:40,960 --> 00:00:45,840 +Mach das Gleiche für unten. + +8 +00:00:45,840 --> 00:00:52,960 +Füge auch Code hinzu für links und rechts. + +9 +00:00:52,960 --> 00:00:55,000 +Jetzt ist es Zeit, das Modell zu testen. + +10 +00:00:55,000 --> 00:00:59,480 +Klicke auf die grüne Flagge und sage dann 'hoch' + +11 +00:00:59,480 --> 00:00:59,920 +'runter' + +12 +00:01:01,680 --> 00:01:03,200 +'links' + +13 +00:01:03,200 --> 00:01:04,280 +und 'rechts'. + +14 +00:01:04,280 --> 00:01:06,360 +Schau dir an wie sich dein Fisch bewegt! + +15 +00:01:06,360 --> 00:01:13,520 +Verwende deine Stimme, um die +Fische zu bewegen und das fallende Essen zu essen. + diff --git a/de-DE/resources/NEW Fish food 1.srt b/de-DE/resources/NEW Fish food 1.srt new file mode 100644 index 0000000..8dc1b2c --- /dev/null +++ b/de-DE/resources/NEW Fish food 1.srt @@ -0,0 +1,29 @@ +1 +00:00:03,760 --> 00:00:07,480 +Gehe zu rpf.io/ml4k + +2 +00:00:07,480 --> 00:00:10,920 +Klicke auf 'Los geht', dann 'Teste es jetzt'. + +3 +00:00:10,920 --> 00:00:16,560 +Füge ein neues Projekt hinzu, nenne es 'Fischfutter', +und stelle ein, dass es Geräusche erkennen lernt. + +4 +00:00:16,560 --> 00:00:19,440 +Speichere Daten in deinem Webbrowser. + +5 +00:00:19,440 --> 00:00:21,520 +Klicke auf den Namen des Projekts, + +6 +00:00:21,520 --> 00:00:23,160 +dann klicke auf 'Trainieren'. + +7 +00:00:23,160 --> 00:00:29,640 +Erlaube den Zugriff auf das Mikrofon, wenn du gefragt wirst. + diff --git a/de-DE/resources/NEW Fish food 2.srt b/de-DE/resources/NEW Fish food 2.srt new file mode 100644 index 0000000..b93073f --- /dev/null +++ b/de-DE/resources/NEW Fish food 2.srt @@ -0,0 +1,12 @@ +1 +00:00:04,560 --> 00:00:06,960 +Nun, füge ein Beispiel für Hintergrundgeräusche hinzu + +2 +00:00:06,960 --> 00:00:11,680 +- also sage nichts, während du aufnimmst. + +3 +00:00:11,680 --> 00:00:20,160 +Du brauchst acht Beispiele. 
+ diff --git a/de-DE/resources/NEW Fish food 3.srt b/de-DE/resources/NEW Fish food 3.srt new file mode 100644 index 0000000..fc63603 --- /dev/null +++ b/de-DE/resources/NEW Fish food 3.srt @@ -0,0 +1,41 @@ +1 +00:00:00,400 --> 00:00:05,080 +Jetzt ist es Zeit, Trainingsbeispiele hinzuzufügen +für deine eigentlichen Befehle. + +2 +00:00:05,080 --> 00:00:09,840 +Zuerst wirst du eine Beschriftung links hinzufügen. + +3 +00:00:09,840 --> 00:00:14,800 +Jetzt nimm dich auf wie du 'links' sagst. + +4 +00:00:14,800 --> 00:00:21,400 +Wiederholen dies, bis du +acht verschiedene Beispiele hast. + +5 +00:00:21,400 --> 00:00:27,280 +Dann, wiederhole dies für rechts. + +6 +00:00:27,280 --> 00:00:29,960 +Und du wirst auch dafür +acht Beispiele hinzufügen. + +7 +00:00:34,360 --> 00:00:41,880 +Und dann für oben und unten, füge acht +Beispiele hinzu wie du das Wort 'hoch' + +8 +00:00:41,880 --> 00:00:43,280 +und das Wort 'runter'. + +9 +00:00:43,280 --> 00:00:54,400 +Du fügst acht hinzu, sodass es +genug Daten gibt, mit denen du dein Modell trainieren kannst. + diff --git a/de-DE/resources/fish-food-starter.sb3 b/de-DE/resources/fish-food-starter.sb3 new file mode 100644 index 0000000..31c1639 Binary files /dev/null and b/de-DE/resources/fish-food-starter.sb3 differ diff --git a/de-DE/resources/readme.txt b/de-DE/resources/readme.txt new file mode 100644 index 0000000..8d1c34b --- /dev/null +++ b/de-DE/resources/readme.txt @@ -0,0 +1 @@ +Um ein Video mit Untertiteln auf VLC (videolan.org) anzuschauen, stelle sicher, dass sich die Videodatei und die Untertiteldatei im selben Ordner befinden und genau den gleichen Namen haben (z.B. video.mp4 und video.srt). Öffne das Video in VLC, und es wird die Untertitel automatisch laden. Wenn die Untertitel nicht erscheinen, klicke mit der rechten Maustaste auf den Videobildschirm, klicke auf **Untertitel**, dann **Untertiteldatei hinzufügen** und wähle die korrekte .srt-Datei aus. Viel Spaß beim Anschauen mit Untertiteln! \ No newline at end of file diff --git a/de-DE/step_1.md b/de-DE/step_1.md new file mode 100644 index 0000000..da310f2 --- /dev/null +++ b/de-DE/step_1.md @@ -0,0 +1,26 @@ +## Einleitung + +Trainiere ein maschinelles Lernmodell, um die Sprachbefehle „hoch“, „runter“, „links“ und „rechts“ zu erkennen und damit einen Fisch in einem lustigen Spiel zu steuern. + +Du benötigst ein **Mikrofon** + +![Ein Scratch-Projekt mit einem Clownfisch und einem Donut in einer Unterwasserszene.](images/whatyouwillmake.png) + +\--- collapse --- + +--- + +## title: Wo werden meine Sprachbefehle gespeichert? + +- Dieses Projekt verwendet eine Technologie namens „Maschinelles Lernen“ (Machine Learning). Systeme für maschinelles Lernen werden mit großer Datenmenge trainiert. +- Für dieses Projekt ist weder die Erstellung eines Kontos noch eine Anmeldung erforderlich. Für dieses Projekt werden die Beispiele für die Modellerstellung nur vorübergehend im Browser gespeichert (nur auf deinem Computer). + +\--- /collapse --- + +## --- collapse --- + +## title: Kein YouTube? Videos herunterladen! + +Du kannst [alle Videos zu diesem Projekt herunterladen](https://rpf.io/p/en/fish-food-go){:target="_blank"}. + +\--- /collapse --- diff --git a/de-DE/step_2.md b/de-DE/step_2.md new file mode 100644 index 0000000..9bc2d21 --- /dev/null +++ b/de-DE/step_2.md @@ -0,0 +1,43 @@ +## Projekt einrichten + + +
+ + +\--- task --- + Gehe zu [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} in einem Webbrowser. + Klicke auf **Jetzt ausprobieren**. + \--- /task --- + \--- task --- + Klicke in der Menüleiste oben auf **Projekte**. + Klicke auf den Knopf **+ Neues Projekt hinzufügen**. + Benenne dein Projekt `Fischfutter`, stelle ein, dass es lernt, **Sounds** zu erkennen, und speichere die Daten **in deinem Webbrowser**. Dann klicke auf **Erstellen**. +![Projekt erstellen](images/create-project.png) + In der Projektliste solltest du jetzt "Fischfutter" sehen. Klicke auf das Projekt. +![Projektliste mit Fischfutter gelistet.](images/projects-list.png) + \--- /task --- + \--- task --- + Klicke auf den **Trainieren**-Knopf. +![Hauptmenü des Projekts mit Pfeil, der auf den Trainieren-Knopf zeigt.](images/project-train.png) + Wenn du eine Pop-up-Nachricht siehst, die dich fragt, ob das Mikrofon verwendet werden darf, klicke auf **Erlaube bei jedem Besuch**. + ![Pop-up-Nachricht mit der Frage, ob das Mikrofon benutzt werden darf.](images/allow-microphone.png) + \--- /task --- + + + diff --git a/de-DE/step_3.md b/de-DE/step_3.md new file mode 100644 index 0000000..a5129d2 --- /dev/null +++ b/de-DE/step_3.md @@ -0,0 +1,27 @@ +## Hintergrundgeräusche + + +
+ + +Sammle zuerst Beispiele von Hintergrundgeräuschen. Das hilft deinem maschinellen Lernmodell, zwischen deinen Sprachbefehlen und den Hintergrundgeräuschen an deinem Standort zu unterscheiden. + \--- task --- + Klicke auf den Knopf **+ Beispiele hinzufügen** in **Hintergrundgeräusche**. + Klicke auf das Mikrofon, aber sprich nicht, um 2 Sekunden Hintergrundgeräusche aufzunehmen. +![Pfeil zeigt auf den Mikrofonknopf.](images/record-button.png) + Klicke auf **Hinzufügen**, um deine Aufnahme zu speichern. + \--- /task --- + \--- task --- + Wiederhole diese Schritte, bis du **mindestens 8 Beispiele** von Hintergrundgeräuschen hast. +![Eimer gefüllt mit 8 Beispielen für Hintergrundgeräusche.](images/8-background.png) + \--- /task --- diff --git a/de-DE/step_4.md b/de-DE/step_4.md new file mode 100644 index 0000000..7b4cbad --- /dev/null +++ b/de-DE/step_4.md @@ -0,0 +1,42 @@ +## Nimm die Anweisungen auf + + +
+ + +Nimm jetzt 8 Beispiele für jedes Wort ('hoch', 'runter', 'links' und 'rechts') auf, sodass dein maschinelles Lernmodell lernen kann, sie zu erkennen. + \--- task --- + Klicke auf **+ Neue Beschriftung hinzufügen** oben rechts auf dem Bildschirm und füge die Beschriftung `links` hinzu. + \--- /task --- + \--- task --- + Klicke auf **+ Beispiel hinzufügen** in der Box für die neue Beschriftung `links` und nimm dich auf, wie du "links" sagst. + Wiederhole das, bis du **mindestens 8 Beispiele** aufgezeichnet hast. + \--- /task --- + \--- task --- + **+ Neue Beschriftung hinzufügen**, um die Beschriftung `rechts` zu erstellen, und nimm 8 Beispiele auf, wie du "rechts" sagst. + \--- /task --- + \--- task --- + **+ Neue Beschriftung hinzufügen**, um die Beschriftung `hoch` zu erstellen, und nimm 8 Beispiele auf, wie du "hoch" sagst. + \--- /task --- + \--- task --- + **+ Neue Beschriftung hinzufügen**, um die Beschriftung `runter` zu erstellen, und nimm 8 Beispiele auf, wie du "runter" sagst. + \--- /task --- diff --git a/de-DE/step_5.md b/de-DE/step_5.md new file mode 100644 index 0000000..e405e05 --- /dev/null +++ b/de-DE/step_5.md @@ -0,0 +1,42 @@ +## Modell trainieren + + +
+ + +Du hast jetzt genügend Beispiele gesammelt, um damit dein maschinelles Lernmodell zu trainieren. + \--- task --- + Klicke auf **< Zurück zum Projekt** in der oberen linken Ecke. + Klicke auf **Lernen & Testen**. + Klicke auf den Knopf **Neues maschinelles Lernmodell trainieren**. Dies kann einige Minuten dauern. +![Pfeil zeigt auf den Knopf 'Neues maschinelles Lernmodell trainieren'.](images/train-new-model.png) + \--- /task --- + Sobald das Training beendet ist, kannst du testen, wie gut dein Modell deine Sprachbefehle erkennt. + \--- task --- + Klicke auf den Knopf **Starte Zuhören** und sage dann "links". + \--- /task --- + Wenn dein maschinelles Lernmodell dies erkennt, zeigt es an, was du seiner Einschätzung nach gesagt hast. +![Pfeil zeigt auf den Knopf 'Starte Zuhören'.](images/test-your-model.png) + \--- task --- + Teste, ob das Modell auch "hoch", "runter" und "rechts" erkennt. + \--- /task --- + Wenn du nicht zufrieden damit bist, wie dein Modell funktioniert, gehe zurück auf die **Trainieren**-Seite und füge weitere Beispiele hinzu, dann trainiere dein Modell erneut. + + + diff --git a/de-DE/step_6.md b/de-DE/step_6.md new file mode 100644 index 0000000..0c29e54 --- /dev/null +++ b/de-DE/step_6.md @@ -0,0 +1,71 @@ +## Bewege den Fisch + + +
+ + +Weil dein Modell nun zwischen Wörtern unterscheiden kann, kannst du es in einem Scratch-Programm verwenden, um einen Fisch auf dem Bildschirm umherzubewegen. + \--- task --- + Klicke auf den Link **< Zurück zum Projekt**. + Klicke auf **Erstellen**. + Klicke auf **Scratch 3**. + Klicke auf **Öffne in Scratch 3**. + \--- /task --- + \--- task --- + Klicke auf **Projektvorlagen** oben und wähle das Projekt 'Fisch Futter', um eine Fisch-Sprite zu laden, die bereits Code beinhaltet. + \--- /task --- + Machine Learning for Kids hat in Scratch einige spezielle Blöcke hinzugefügt, mit denen du das Modell verwenden kannst, das du gerade trainiert hast. Finde sie am unteren Rand der Blockliste. + ![Eine Liste neuer Blöcke, die von Machine Learning for Kids erstellt wurden, inklusive Anweisungen wie 'Starte zuhören', 'Stoppe zuhören' und 'Wenn ich links höre'.](images/new-blocks.png) + \--- task --- + Wenn die **Fisch**-Sprite ausgewählt ist, klicke auf das Tab **Code**. Finde die richtige Stelle im Code und füge einen speziellen Block hinzu, um dem Modell zu sagen, dass es mit dem Zuhören beginnen soll. + ![In der Fisch-Sprite wird nach dem "Wenn die Flagge angeklickt wird"-Baustein ein 'Starte Zuhören'-Baustein hinzugefügt.](images/start-listening.png) + \--- /task --- + \--- task --- + Füge den Code für 'hoch' zu der **Fisch**-Sprite hinzu. +![In der Fisch-Sprite wird ein Baustein 'wenn ich hoch höre' hinzugefügt, dann ein 'Zeige in Richtung 0'-Baustein.](images/starter-code.png) + \--- /task --- + \--- task --- + Schau dir den Code an, der den Fisch nach oben bewegt, und versuche dann, den Code für unten, links und rechts selbst zu schreiben. + ## --- collapse --- + ## title: Zeige mir wie + ![Drei weitere Blockpaare werden hinzugefügt: 'Wenn ich links höre' und 'Zeige in Richtung -90'; "Wenn ich rechts höre" und "Zeige in Richtung 90"; "Wenn ich runter höre" und "Zeige in Richtung 180".](images/finished-code.png) + \--- /collapse --- + \--- /task --- + \--- task --- + Klicke auf die **grüne Flagge** und sage hoch, runter, links oder rechts. Überprüfe, ob sich der Fisch in die von dir erwartete Richtung bewegt. + \--- /task --- + + + + + diff --git a/de-DE/step_7.md b/de-DE/step_7.md new file mode 100644 index 0000000..cfaa115 --- /dev/null +++ b/de-DE/step_7.md @@ -0,0 +1,39 @@ +## Herausforderung + \--- challenge --- + \--- task --- + Füge eine Variable hinzu, um Punkte zu zählen, und füge jedes Mal einen Punkt hinzu, wenn der Fisch etwas Futter frisst. + ## --- collapse --- + ## title: Zeige mir wie + Füge den eingekreisten Code zur **Futter**-Sprite hinzu. + ![Scratch-Code: Setze Punktzahl auf 0, zeige, wiederhole bis y-Position < -170, ändere y um -3, falls der Fisch berührt wird, ändere die Punktzahl um 1, verstecke dich.](images/score-hint.png) + \--- /collapse --- + \--- /task --- + \--- task --- + Füge eine neue Sprite hinzu, die kein Futter ist, und ziehe Punkte ab, wenn der Fisch sie frisst. + \--- /task --- + \--- task --- + Lass das Futter mit verschiedenen zufälligen Geschwindigkeiten fallen. + \--- /task --- + \--- task --- + Oder, wenn dir das lieber ist, erstelle ein völlig neues Spiel, das Sprachbefehle verwendet, um eine Figur zu steuern! + \--- /task --- + \--- /challenge --- diff --git a/de-DE/step_8.md b/de-DE/step_8.md new file mode 100644 index 0000000..3af9741 --- /dev/null +++ b/de-DE/step_8.md @@ -0,0 +1,3 @@ +## Was kommt als Nächstes? 
+ +Es gibt viele andere Maschinelles Lernen und AI Projekte auf [Machine learning with Scratch](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning). diff --git a/es-LA/images/8-background.png b/es-LA/images/8-background.png new file mode 100644 index 0000000..d27ef20 Binary files /dev/null and b/es-LA/images/8-background.png differ diff --git a/es-LA/images/allow-microphone.png b/es-LA/images/allow-microphone.png new file mode 100644 index 0000000..ea43466 Binary files /dev/null and b/es-LA/images/allow-microphone.png differ diff --git a/es-LA/images/banner.png b/es-LA/images/banner.png new file mode 100644 index 0000000..0a26f21 Binary files /dev/null and b/es-LA/images/banner.png differ diff --git a/es-LA/images/create-project.png b/es-LA/images/create-project.png new file mode 100644 index 0000000..1a48e1c Binary files /dev/null and b/es-LA/images/create-project.png differ diff --git a/es-LA/images/finished-code.png b/es-LA/images/finished-code.png new file mode 100644 index 0000000..502be93 Binary files /dev/null and b/es-LA/images/finished-code.png differ diff --git a/es-LA/images/new-blocks.png b/es-LA/images/new-blocks.png new file mode 100644 index 0000000..593366e Binary files /dev/null and b/es-LA/images/new-blocks.png differ diff --git a/es-LA/images/project-train.png b/es-LA/images/project-train.png new file mode 100644 index 0000000..660489d Binary files /dev/null and b/es-LA/images/project-train.png differ diff --git a/es-LA/images/projects-list.png b/es-LA/images/projects-list.png new file mode 100644 index 0000000..6938954 Binary files /dev/null and b/es-LA/images/projects-list.png differ diff --git a/es-LA/images/record-button.png b/es-LA/images/record-button.png new file mode 100644 index 0000000..4db43f0 Binary files /dev/null and b/es-LA/images/record-button.png differ diff --git a/es-LA/images/score-hint.png b/es-LA/images/score-hint.png new file mode 100644 index 0000000..361b149 Binary files /dev/null and b/es-LA/images/score-hint.png differ diff --git a/es-LA/images/start-listening.png b/es-LA/images/start-listening.png new file mode 100644 index 0000000..517dc83 Binary files /dev/null and b/es-LA/images/start-listening.png differ diff --git a/es-LA/images/starter-code.png b/es-LA/images/starter-code.png new file mode 100644 index 0000000..b9ff8c1 Binary files /dev/null and b/es-LA/images/starter-code.png differ diff --git a/es-LA/images/test-your-model.png b/es-LA/images/test-your-model.png new file mode 100644 index 0000000..84c6652 Binary files /dev/null and b/es-LA/images/test-your-model.png differ diff --git a/es-LA/images/train-new-model.png b/es-LA/images/train-new-model.png new file mode 100644 index 0000000..62ce33f Binary files /dev/null and b/es-LA/images/train-new-model.png differ diff --git a/es-LA/images/whatyouwillmake.png b/es-LA/images/whatyouwillmake.png new file mode 100644 index 0000000..b11ec59 Binary files /dev/null and b/es-LA/images/whatyouwillmake.png differ diff --git a/es-LA/meta.yml b/es-LA/meta.yml new file mode 100644 index 0000000..61aeaf7 --- /dev/null +++ b/es-LA/meta.yml @@ -0,0 +1,22 @@ +title: Fish food +hero_image: images/banner.png +description: Control a fish using only your voice and direct it to the food +version: 1 +listed: true +copyedit: false +last_tested: "2024-06-04" +steps: + - title: What you will make + - title: Set up the project + completion: + - engaged + - title: Background noise + - title: Record the directions + - title: Train the model + - title: Move the fish + completion: + - internal + - 
title: Challenge + challenge: true + completion: + - external diff --git a/es-LA/resources/NEW Fish 4.srt b/es-LA/resources/NEW Fish 4.srt new file mode 100644 index 0000000..2f9c01e --- /dev/null +++ b/es-LA/resources/NEW Fish 4.srt @@ -0,0 +1,24 @@ +1 +00:00:04,280 --> 00:00:09,120 +Click on 'Back to project',  +then click on 'Learn & Test'. + +2 +00:00:09,120 --> 00:00:16,520 +Train your new machine learning  +model - it might take a few minutes. + +3 +00:00:16,520 --> 00:00:19,720 +Click on 'Start listening' and then say 'left'. + +4 +00:00:19,720 --> 00:00:23,360 +Test whether the model  +recognises you saying 'left'. + +5 +00:00:23,360 --> 00:00:29,960 +Also check whether the model  +recognises 'right', 'up' and 'down'. + diff --git a/es-LA/resources/NEW Fish 5.srt b/es-LA/resources/NEW Fish 5.srt new file mode 100644 index 0000000..c07c129 --- /dev/null +++ b/es-LA/resources/NEW Fish 5.srt @@ -0,0 +1,64 @@ +1 +00:00:05,480 --> 00:00:10,840 +Click on 'Back to project', then click on 'Make'. + +2 +00:00:10,840 --> 00:00:15,400 +You're going to use the model in Scratch 3. + +3 +00:00:15,400 --> 00:00:24,280 +Go to 'Project templates' and  +find the template for Fish Food. + +4 +00:00:24,280 --> 00:00:26,560 +Some code has been added for you. + +5 +00:00:26,560 --> 00:00:33,000 +Open the menu of special Machine Learning for  +Kids blocks, and drag a 'when I hear up' block. + +6 +00:00:33,000 --> 00:00:40,960 +Now add some code so that when you say  +the word 'up', the fish moves upwards. + +7 +00:00:40,960 --> 00:00:45,840 +Do the same for down. + +8 +00:00:45,840 --> 00:00:52,960 +Add some code for left and right too. + +9 +00:00:52,960 --> 00:00:55,000 +Now it's time to test the model. + +10 +00:00:55,000 --> 00:00:59,480 +Click the green flag and then say 'up' + +11 +00:00:59,480 --> 00:00:59,920 +'down' + +12 +00:01:01,680 --> 00:01:03,200 +'left' + +13 +00:01:03,200 --> 00:01:04,280 +and 'right'. + +14 +00:01:04,280 --> 00:01:06,360 +Watch your fish move! + +15 +00:01:06,360 --> 00:01:13,520 +Use your voice to move the  +fish and eat the falling food. + diff --git a/es-LA/resources/NEW Fish food 1.srt b/es-LA/resources/NEW Fish food 1.srt new file mode 100644 index 0000000..adbbe6f --- /dev/null +++ b/es-LA/resources/NEW Fish food 1.srt @@ -0,0 +1,29 @@ +1 +00:00:03,760 --> 00:00:07,480 +Go to rpf.io/ml4k + +2 +00:00:07,480 --> 00:00:10,920 +Click on 'Get started', then 'Try it now'. + +3 +00:00:10,920 --> 00:00:16,560 +Add a new project, call it 'Fish food',  +and set it to learn to recognise sounds. + +4 +00:00:16,560 --> 00:00:19,440 +Store the data in your web browser. + +5 +00:00:19,440 --> 00:00:21,520 +Click on the name of the project, + +6 +00:00:21,520 --> 00:00:23,160 +then click 'Train'. + +7 +00:00:23,160 --> 00:00:29,640 +Allow microphone access if you are asked. + diff --git a/es-LA/resources/NEW Fish food 2.srt b/es-LA/resources/NEW Fish food 2.srt new file mode 100644 index 0000000..d275b89 --- /dev/null +++ b/es-LA/resources/NEW Fish food 2.srt @@ -0,0 +1,12 @@ +1 +00:00:04,560 --> 00:00:06,960 +Now, add an example of background noise + +2 +00:00:06,960 --> 00:00:11,680 +- so don't say anything when you record. + +3 +00:00:11,680 --> 00:00:20,160 +You're going to need eight examples. 
+ diff --git a/es-LA/resources/NEW Fish food 3.srt b/es-LA/resources/NEW Fish food 3.srt new file mode 100644 index 0000000..dfcdb51 --- /dev/null +++ b/es-LA/resources/NEW Fish food 3.srt @@ -0,0 +1,41 @@ +1 +00:00:00,400 --> 00:00:05,080 +So now it's time to add training  +samples for your actual commands. + +2 +00:00:05,080 --> 00:00:09,840 +First, you're going to add a label called left. + +3 +00:00:09,840 --> 00:00:14,800 +Now record yourself saying the word 'left'. + +4 +00:00:14,800 --> 00:00:21,400 +Repeat this until you have  +eight different examples. + +5 +00:00:21,400 --> 00:00:27,280 +Then do the same for right. + +6 +00:00:27,280 --> 00:00:29,960 +And you're also going to add  +eight examples for that one. + +7 +00:00:34,360 --> 00:00:41,880 +And then up and down, add eight  +examples of you saying the word 'up', + +8 +00:00:41,880 --> 00:00:43,280 +and the word 'down'. + +9 +00:00:43,280 --> 00:00:54,400 +You're adding eight so there's  +enough data to train your model with. + diff --git a/es-LA/resources/fish-food-starter.sb3 b/es-LA/resources/fish-food-starter.sb3 new file mode 100644 index 0000000..31c1639 Binary files /dev/null and b/es-LA/resources/fish-food-starter.sb3 differ diff --git a/es-LA/resources/readme.txt b/es-LA/resources/readme.txt new file mode 100644 index 0000000..0e0956c --- /dev/null +++ b/es-LA/resources/readme.txt @@ -0,0 +1 @@ +To watch a video with subtitles on VLC (videolan.org), ensure the video file and subtitle file are in the same folder and have the exact same name (e.g., video.mp4 and video.srt). Open the video in VLC, and it will automatically load the subtitles. If the subtitles don’t appear, right-click the video screen, go to **Subtitle**, then **Add Subtitle File**, and select the correct .srt file. Enjoy watching with subtitles! \ No newline at end of file diff --git a/es-LA/step_1.md b/es-LA/step_1.md new file mode 100644 index 0000000..763490f --- /dev/null +++ b/es-LA/step_1.md @@ -0,0 +1,26 @@ +## Introduction + +Train a machine learning model to recognise voice commands 'up', 'down', 'left', and 'right', and use them to control a fish in a fun game. + +You will need a **microphone**. + +![A Scratch project with a clownfish and a doughnut in an underwater scene.](images/whatyouwillmake.png) + +\--- collapse --- + +--- + +## title: Where are my voice commands stored? + +- This project uses a technology called 'machine learning'. Machine learning systems are trained using a large amount of data. +- This project does not require you to create an account or log in. For this project, the examples you use to make the model are only stored temporarily in your browser (only on your machine). + +\--- /collapse --- + +## --- collapse --- + +## title: No YouTube? Download the videos! + +You can [download all the videos for this project](https://rpf.io/p/en/fish-food-go){:target="_blank"}. + +\--- /collapse --- diff --git a/es-LA/step_2.md b/es-LA/step_2.md new file mode 100644 index 0000000..ff86a5e --- /dev/null +++ b/es-LA/step_2.md @@ -0,0 +1,43 @@ +## Set up the project + + +
+ + +\--- task --- + +Go to [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} in a web browser. + +Click on **Try it now**. + +\--- /task --- + +\--- task --- + +Click on **Projects** in the menu bar at the top. + +Click on the **+ Add a new project** button. + +Name your project `Fish food` and set it to learn to recognise **sounds**, and store data **in your web browser**. Then click on **Create**. +![Creating a project](images/create-project.png) + +You should now see 'Fish food' in the projects list. Click on the project. +![Project list with Fish food listed.](images/projects-list.png) + +\--- /task --- + +\--- task --- + +Click on the **Train** button. +![Project main menu with an arrow pointing to the Train button.](images/project-train.png) + +If you see a pop-up message asking to use the microphone, click on **Allow on every visit**. + +![Pop-up message asking to allow microphone use.](images/allow-microphone.png) + +\--- /task --- + + + diff --git a/es-LA/step_3.md b/es-LA/step_3.md new file mode 100644 index 0000000..f34e1a8 --- /dev/null +++ b/es-LA/step_3.md @@ -0,0 +1,27 @@ +## Background noise + + +
+ + +First, you will collect samples of background noise. This will help your machine learning model to tell the difference between your voice commands, and the background noise where you are. + +\--- task --- + +Click the **+ Add example** button in **background noise**. + +Click on the microphone but don't say anything to record 2 seconds of background noise. +![Arrow pointing to microphone button.](images/record-button.png) + +Click the **Add** button to save your recording. + +\--- /task --- + +\--- task --- + +Repeat those steps until you have **at least 8 examples** of background noise. +![Bucket filled with 8 background noise examples.](images/8-background.png) + +\--- /task --- diff --git a/es-LA/step_4.md b/es-LA/step_4.md new file mode 100644 index 0000000..b9dc8e6 --- /dev/null +++ b/es-LA/step_4.md @@ -0,0 +1,42 @@ +## Record the directions + + +
+ + +Now you will record 8 examples of each word ('up', 'down', 'left', and 'right') so that your machine learning model can learn to recognise them. + +\--- task --- + +Click on **+ Add new label** on the top right of the screen and add a label called `left`. + +\--- /task --- + +\--- task --- + +Click on **+ Add example** inside the box for the new `left` label, and record yourself saying "left". + +Repeat until you have recorded **at least 8 examples**. + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `right` and record 8 examples of you saying "right". + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `up` and record 8 examples of you saying "up". + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `down` and record 8 examples of you saying "down". + +\--- /task --- + diff --git a/es-LA/step_5.md b/es-LA/step_5.md new file mode 100644 index 0000000..e4be274 --- /dev/null +++ b/es-LA/step_5.md @@ -0,0 +1,42 @@ +## Train the model + + +
+ + +You have gathered the examples you need, now you will use these examples to train your machine learning model. + +\--- task --- + +Click on **< Back to project** in the top left-hand corner. + +Click on **Learn & Test**. + +Click on the button labelled **Train new machine learning model**. This may take a few minutes to complete. +![Arrow pointing to a button saying 'Train new machine learning model'.](images/train-new-model.png) + +\--- /task --- + +Once the training has finished, you can test how well your model recognises your voice commands. + +\--- task --- + +Click the **Start listening** button, then say "left". + +\--- /task --- + +If your machine learning model recognises it, it will display what it predicts you said. +![Arrow pointing to the start listening button.](images/test-your-model.png) + +\--- task --- + +Test whether the model recognises "up", "down", and "right" as well. + +\--- /task --- + +If you are not happy with how the model works, go back to the **Train** page and add more examples, then train your model again. + + + diff --git a/es-LA/step_6.md b/es-LA/step_6.md new file mode 100644 index 0000000..0d586a4 --- /dev/null +++ b/es-LA/step_6.md @@ -0,0 +1,71 @@ +## Move the fish + + +
+ + +Now that your model can distinguish between words, you can use it in a Scratch program to move a fish around the screen. + +\--- task --- + +Click on the **< Back to project** link. + +Click on **Make**. + +Click on **Scratch 3**. + +Click on **Open in Scratch 3**. + +\--- /task --- + +\--- task --- + +Click on **Project templates** at the top and select the 'Fish food' project to load the fish sprite, which has some code already added to it. + +\--- /task --- + +Machine Learning for Kids has added some special blocks to Scratch to allow you to use the model you just trained. Find them at the bottom of the blocks list. + +![A list of new blocks created by Machine Learning for Kids, including instructions such as 'Start listening', 'Stop listening', and 'When I hear left'.](images/new-blocks.png) + +\--- task --- + +With the **fish** sprite selected, click on the **Code** tab. Find the right place in the code and add a special block to tell the model to start listening. + +![In the fish sprite, a 'start listening' block is added after the 'when flag clicked' block.](images/start-listening.png) + +\--- /task --- + +\--- task --- + +Add the code for 'up' to the **Fish** sprite. +![In the fish sprite, a 'when I hear up' block is added, then a 'point in direction 0' block.](images/starter-code.png) + +\--- /task --- + +\--- task --- + +Look at the code you have to move the fish up, then see if you can work out the code for down, left, and right. + +## --- collapse --- + +## title: Show me how + +![Three more pairs of blocks are added: 'When I hear left' and 'point in direction -90'; 'When I hear right' and 'point in direction 90'; 'When I hear down' and 'point in direction 180'.](images/finished-code.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Click the **green flag** and say up, down, left, or right. Check that the fish moves in the direction you expected. + +\--- /task --- + + + + + diff --git a/es-LA/step_7.md b/es-LA/step_7.md new file mode 100644 index 0000000..6e10a97 --- /dev/null +++ b/es-LA/step_7.md @@ -0,0 +1,39 @@ +## Challenge + +\--- challenge --- + +\--- task --- + +Add a variable to keep track of the score, and add a point each time the fish eats some food. + +## --- collapse --- + +## title: Show me how + +Add the circled code to the **Food** sprite. + +![Scratch code: Set score to 0, show, repeat until y position < -170, change y by -3, if touching fish then change score by 1, hide.](images/score-hint.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Add a new sprite that is not food, and deduct points if the fish eats it. + +\--- /task --- + +\--- task --- + +Make the food fall at different random speeds. + +\--- /task --- + +\--- task --- + +Or, if you prefer, make a completely different game that uses voice commands to control a character! + +\--- /task --- + +\--- /challenge --- diff --git a/es-LA/step_8.md b/es-LA/step_8.md new file mode 100644 index 0000000..d4b22e9 --- /dev/null +++ b/es-LA/step_8.md @@ -0,0 +1,3 @@ +## What can you do now? + +There are lots of other machine learning and AI projects in the [Machine learning with Scratch](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning) pathway. 
diff --git a/fr-FR/meta.yml b/fr-FR/meta.yml index 24f54e0..72b542e 100644 --- a/fr-FR/meta.yml +++ b/fr-FR/meta.yml @@ -1,4 +1,3 @@ ---- title: Nourriture pour poissons hero_image: images/banner.png description: Contrôle un poisson en utilisant uniquement ta voix et dirige-le vers la nourriture @@ -10,14 +9,14 @@ steps: - title: Ce que tu vas faire - title: Configurer le projet completion: - - engaged + - engaged - title: Bruit de fond - title: Enregistrer les directions - title: Entraîner le modèle - title: Déplacer le poisson completion: - - internal + - internal - title: Défi challenge: true completion: - - external + - external diff --git a/fr-FR/step_1.md b/fr-FR/step_1.md index 0a47b67..951a699 100644 --- a/fr-FR/step_1.md +++ b/fr-FR/step_1.md @@ -6,22 +6,21 @@ Tu auras besoin d'un **microphone**. ![Un projet Scratch avec un poisson-clown et un beignet dans une scène sous-marine.](images/whatyouwillmake.png) ---- collapse --- +\--- collapse --- --- -title: Où sont stockées mes commandes vocales ? ---- + +## title: Où sont stockées mes commandes vocales ? - Ce projet utilise une technologie appelée « apprentissage automatique ». Les systèmes d'apprentissage automatique sont entraînés à l'aide d'une grande quantité de données. - Ce projet ne nécessite pas la création d'un compte ou d'une connexion. Pour ce projet, les exemples que tu utilises pour réaliser le modèle ne sont stockés que temporairement dans ton navigateur (uniquement sur ta machine). ---- /collapse --- +\--- /collapse --- ---- collapse --- ---- -title: Pas de YouTube ? Télécharge les vidéos ! ---- +## --- collapse --- + +## title: Pas de YouTube ? Télécharge les vidéos ! -Tu peux [télécharger l'ensemble des vidéos de ce projet](https://rpf.io/p/fr-FR/fish-food-go){:target="_blank"}. +Tu peux [télécharger l'ensemble des vidéos de ce projet](https://rpf.io/p/en/fish-food-go){:target="_blank"}. ---- /collapse --- +\--- /collapse --- diff --git a/fr-FR/step_2.md b/fr-FR/step_2.md index da16c9a..9e7a0bf 100644 --- a/fr-FR/step_2.md +++ b/fr-FR/step_2.md @@ -6,29 +6,29 @@ ---- task --- +\--- task --- Va sur [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} dans un navigateur web. Clique sur **Essayer maintenant**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Clique sur **Projets** dans la barre de menus en haut de la page. Clique sur le bouton **+ Ajouter un nouveau projet**. -Nomme ton projet `Nourriture pour poissons` et définis-le pour apprendre à reconnaître les **sons**, et stocker les données **dans ton navigateur web**. Puis clique sur **Créer**. +Nomme ton projet « Nourriture pour poissons » et définis-le pour apprendre à reconnaître les **sons**, et stocker les données **dans ton navigateur web**. Puis clique sur **Créer**. ![Création d'un projet](images/create-project.png) Tu devrais maintenant voir « Nourriture pour poissons » dans la liste des projets. Clique sur le projet. ![Liste de projets avec Nourriture pour poissons répertoriée.](images/projects-list.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Clique sur le bouton **Entraîner**. 
![Menu principal du projet avec une flèche pointant vers le bouton Entraîner.](images/project-train.png) @@ -37,7 +37,7 @@ Si tu vois un message contextuel te demandant d'utiliser le microphone, clique s ![Message contextuel demandant d'autoriser l'utilisation du microphone.](images/allow-microphone.png) ---- /task --- +\--- /task --- diff --git a/fr-FR/step_3.md b/fr-FR/step_3.md index e512623..bdc572f 100644 --- a/fr-FR/step_3.md +++ b/fr-FR/step_3.md @@ -8,7 +8,7 @@ Tout d'abord, tu vas collecter des échantillons de bruit de fond. Cela aidera ton modèle d’apprentissage automatique à faire la différence entre tes commandes vocales et le bruit de fond de ton environnement. ---- task --- +\--- task --- Clique sur le bouton **+ Ajouter un exemple** dans **background noise**. @@ -17,11 +17,11 @@ Clique sur le microphone mais ne dis rien pour enregistrer 2 secondes de bruit Clique sur le bouton **Ajouter** pour enregistrer ton enregistrement. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Répète ces étapes jusqu'à ce que tu aies **au moins 8 exemples** de bruit de fond. ![Élément rempli de 8 exemples de bruit de fond.](images/8-background.png) ---- /task --- +\--- /task --- diff --git a/fr-FR/step_4.md b/fr-FR/step_4.md index 8c48fd6..9d3e6c8 100644 --- a/fr-FR/step_4.md +++ b/fr-FR/step_4.md @@ -8,35 +8,35 @@ Tu vas maintenant enregistrer 8 exemples pour chaque mot (« haut », « bas », « gauche » et « droite ») pour que ton modèle d’apprentissage automatique puisse apprendre à les reconnaître. ---- task --- +\--- task --- -Clique sur **+ Ajouter une nouvelle étiquette** en haut à droite de l'écran et ajoute une étiquette appelée `gauche`. +Clique sur **+ Ajouter une nouvelle étiquette** en haut à droite de l'écran et ajoute une étiquette appelée « gauche ». ---- /task --- +\--- /task --- ---- task --- +\--- task --- -Clique sur **+ Ajouter un exemple** dans la case pour la nouvelle étiquette `gauche`, et enregistre-toi en disant « gauche ». +Clique sur **+ Ajouter un exemple** dans la case pour la nouvelle étiquette « gauche », et enregistre-toi en disant « gauche ». Répète jusqu'à ce que tu aies enregistré **au moins 8 exemples**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- -**+ Ajoute une nouvelle étiquette** pour créer une autre étiquette appelée `droite` et enregistre 8 exemples où tu dis « droite ». +**+ Ajoute une nouvelle étiquette** pour créer une autre étiquette appelée « droite » et enregistre 8 exemples où tu dis « droite ». ---- /task --- +\--- /task --- ---- task --- +\--- task --- -**+ Ajoute une nouvelle étiquette** pour créer une autre étiquette appelée `haut` et enregistre 8 exemples où tu dis « haut ». +**+ Ajoute une nouvelle étiquette** pour créer une autre étiquette appelée « haut » et enregistre 8 exemples où tu dis « haut ». ---- /task --- +\--- /task --- ---- task --- +\--- task --- -**+ Ajoute une nouvelle étiquette** pour créer une autre étiquette appelée `bas` et enregistre 8 exemples où tu dis « bas ». +**+ Ajoute une nouvelle étiquette** pour créer une autre étiquette appelée « bas » et enregistre 8 exemples où tu dis « bas ». ---- /task --- +\--- /task --- diff --git a/fr-FR/step_5.md b/fr-FR/step_5.md index 0b71441..b721991 100644 --- a/fr-FR/step_5.md +++ b/fr-FR/step_5.md @@ -8,7 +8,7 @@ Tu as rassemblé les exemples dont tu as besoin, tu vas maintenant utiliser ces exemples pour entraîner ton modèle d'apprentissage automatique. ---- task --- +\--- task --- Clique sur **< Revenir au projet** dans le coin supérieur gauche. 
@@ -17,24 +17,24 @@ Clique sur **Apprendre & Tester**. Clique sur le bouton **Entraîner un nouveau modèle d'apprentissage machine**. Cela peut prendre quelques minutes. ![Flèche pointant vers un bouton indiquant "Entraîner un nouveau modèle d'apprentissage machine".](images/train-new-model.png) ---- /task --- +\--- /task --- Une fois l'entraînement terminé, tu peux tester comment ton modèle reconnaît tes commandes vocales. ---- task --- +\--- task --- Clique sur le bouton **Commencez à écouter**, puis dis « gauche ». ---- /task --- +\--- /task --- Si ton modèle d'apprentissage automatique le reconnaît, il affichera ce qu'il te prédit. ![Flèche pointant vers le bouton Commencez à écouter.](images/test-your-model.png) ---- task --- +\--- task --- Teste si le modèle reconnaît également « haut », « bas » et « droite ». ---- /task --- +\--- /task --- Si tu n'es pas satisfait·e de la façon dont le modèle fonctionne, retourne à la page **Entraîner** et ajoute d'autres exemples, puis entraîne ton modèle à nouveau. diff --git a/fr-FR/step_6.md b/fr-FR/step_6.md index f5592d8..74004bc 100644 --- a/fr-FR/step_6.md +++ b/fr-FR/step_6.md @@ -8,7 +8,7 @@ Maintenant que ton modèle peut distinguer les mots, tu peux l’utiliser dans un programme Scratch pour déplacer un poisson sur l’écran. ---- task --- +\--- task --- Clique sur le lien **< Revenir au projet**. @@ -18,53 +18,52 @@ Clique sur **Scratch 3**. Clique sur **Ouvrir dans Scratch 3**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Clique sur **Modèles de projet** en haut et sélectionne le projet « Fish Food » pour charger le sprite poisson, auquel du code a déjà été ajouté. ---- /task --- +\--- /task --- Machine Learning for Kids a ajouté des blocs spéciaux à Scratch pour te permettre d'utiliser le modèle que tu viens d'entraîner. Trouve-les en bas de la liste des blocs. ![Une liste de nouveaux blocs créés par Machine Learning for Kids, comprenant des instructions telles que "commencez à écouter", "arrêter d'écouter" et "when I hear gauche".](images/new-blocks.png) ---- task --- +\--- task --- Avec le sprite **poisson** sélectionné, clique sur l'onglet **Code**. Trouve le bon endroit dans le code et ajoute un bloc spécial pour indiquer au modèle de commencer à écouter. ![Dans le sprite poisson, un bloc "commencer à écouter" est ajouté après le bloc "quand le drapeau est cliqué".](images/start-listening.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Ajoute le code pour « haut » au sprite **Poisson**. ![Dans le sprite poisson, un bloc "when I hear haut" est ajouté, puis un bloc "s'orienter à 0".](images/starter-code.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Regarde le code que tu dois utiliser pour déplacer le poisson vers le haut, puis vois si tu peux déterminer le code pour le bas, la gauche et la droite. ---- collapse --- ---- -title: Montre-moi comment ---- +## --- collapse --- + +## title: Montre-moi comment ![Trois autres paires de blocs sont ajoutées : "when I hear gauche" et "s'orienter à -90 ; "when I hear droite" et "s'orienter à 90" ; "when I hear bas" et "s'orienter à 180".](images/finished-code.png) ---- /collapse --- +\--- /collapse --- ---- /task --- +\--- /task --- ---- task --- +\--- task --- Clique sur le **drapeau vert** et dis haut, bas, gauche ou droite. Vérifie que le poisson se déplace dans la direction souhaitée. 
---- /task --- +\--- /task --- diff --git a/fr-FR/step_7.md b/fr-FR/step_7.md index 6f9e1a4..5c184c8 100644 --- a/fr-FR/step_7.md +++ b/fr-FR/step_7.md @@ -1,40 +1,39 @@ ## Défi ---- challenge --- +\--- challenge --- ---- task --- +\--- task --- Ajoute une variable pour suivre le score et ajoute un point à chaque fois que le poisson mange de la nourriture. ---- collapse --- ---- -title: Montre-moi comment ---- +## --- collapse --- + +## title: Montre-moi comment Ajoute le code entouré au sprite **Nourriture**. ![Code de scratch : mettre score à 0, montrer, répéter jusqu'à ce que ordonnée y < -170, ajouter -3 à y, si touche le Poisson alors ajouter 1 à score, cacher.](images/score-hint.png) ---- /collapse --- +\--- /collapse --- ---- /task --- +\--- /task --- ---- task --- +\--- task --- Ajoute un nouveau sprite qui n'est pas de la nourriture, et déduit des points si le poisson le mange. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Fais tomber les aliments à différentes vitesses aléatoires. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Ou, si tu préfères, crée un jeu complètement différent qui utilise des commandes vocales pour contrôler un personnage ! ---- /task --- +\--- /task --- ---- /challenge --- +\--- /challenge --- diff --git a/fr-FR/step_8.md b/fr-FR/step_8.md index d3bf226..3194b2e 100644 --- a/fr-FR/step_8.md +++ b/fr-FR/step_8.md @@ -1,12 +1,3 @@ ## Que peux-tu faire maintenant ? -Il existe de nombreux autres projets d'apprentissage automatique et d'IA dans le parcours [Apprentissage automatique avec Scratch](https://projects.raspberrypi.org/fr-FR/pathways/scratch-machine-learning). - -*** - -Ce projet a été traduit par des bénévoles: - -Jonathan Vannieuwkerke -Michel Arnols - -Grâce aux bénévoles, nous pouvons donner aux gens du monde entier la chance d'apprendre dans leur propre langue. Vous pouvez nous aider à atteindre plus de personnes en vous portant volontaire pour la traduction - plus d'informations sur [rpf.io/translate](https://rpf.io/translate). +Il existe de nombreux autres projets d'apprentissage automatique et d'IA dans le parcours [Apprentissage automatique avec Scratch](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning). 
diff --git a/hi-IN/images/8-background.png b/hi-IN/images/8-background.png new file mode 100644 index 0000000..d27ef20 Binary files /dev/null and b/hi-IN/images/8-background.png differ diff --git a/hi-IN/images/allow-microphone.png b/hi-IN/images/allow-microphone.png new file mode 100644 index 0000000..ea43466 Binary files /dev/null and b/hi-IN/images/allow-microphone.png differ diff --git a/hi-IN/images/banner.png b/hi-IN/images/banner.png new file mode 100644 index 0000000..0a26f21 Binary files /dev/null and b/hi-IN/images/banner.png differ diff --git a/hi-IN/images/create-project.png b/hi-IN/images/create-project.png new file mode 100644 index 0000000..1a48e1c Binary files /dev/null and b/hi-IN/images/create-project.png differ diff --git a/hi-IN/images/finished-code.png b/hi-IN/images/finished-code.png new file mode 100644 index 0000000..502be93 Binary files /dev/null and b/hi-IN/images/finished-code.png differ diff --git a/hi-IN/images/new-blocks.png b/hi-IN/images/new-blocks.png new file mode 100644 index 0000000..593366e Binary files /dev/null and b/hi-IN/images/new-blocks.png differ diff --git a/hi-IN/images/project-train.png b/hi-IN/images/project-train.png new file mode 100644 index 0000000..660489d Binary files /dev/null and b/hi-IN/images/project-train.png differ diff --git a/hi-IN/images/projects-list.png b/hi-IN/images/projects-list.png new file mode 100644 index 0000000..6938954 Binary files /dev/null and b/hi-IN/images/projects-list.png differ diff --git a/hi-IN/images/record-button.png b/hi-IN/images/record-button.png new file mode 100644 index 0000000..4db43f0 Binary files /dev/null and b/hi-IN/images/record-button.png differ diff --git a/hi-IN/images/score-hint.png b/hi-IN/images/score-hint.png new file mode 100644 index 0000000..361b149 Binary files /dev/null and b/hi-IN/images/score-hint.png differ diff --git a/hi-IN/images/start-listening.png b/hi-IN/images/start-listening.png new file mode 100644 index 0000000..517dc83 Binary files /dev/null and b/hi-IN/images/start-listening.png differ diff --git a/hi-IN/images/starter-code.png b/hi-IN/images/starter-code.png new file mode 100644 index 0000000..b9ff8c1 Binary files /dev/null and b/hi-IN/images/starter-code.png differ diff --git a/hi-IN/images/test-your-model.png b/hi-IN/images/test-your-model.png new file mode 100644 index 0000000..84c6652 Binary files /dev/null and b/hi-IN/images/test-your-model.png differ diff --git a/hi-IN/images/train-new-model.png b/hi-IN/images/train-new-model.png new file mode 100644 index 0000000..62ce33f Binary files /dev/null and b/hi-IN/images/train-new-model.png differ diff --git a/hi-IN/images/whatyouwillmake.png b/hi-IN/images/whatyouwillmake.png new file mode 100644 index 0000000..b11ec59 Binary files /dev/null and b/hi-IN/images/whatyouwillmake.png differ diff --git a/hi-IN/meta.yml b/hi-IN/meta.yml new file mode 100644 index 0000000..61aeaf7 --- /dev/null +++ b/hi-IN/meta.yml @@ -0,0 +1,22 @@ +title: Fish food +hero_image: images/banner.png +description: Control a fish using only your voice and direct it to the food +version: 1 +listed: true +copyedit: false +last_tested: "2024-06-04" +steps: + - title: What you will make + - title: Set up the project + completion: + - engaged + - title: Background noise + - title: Record the directions + - title: Train the model + - title: Move the fish + completion: + - internal + - title: Challenge + challenge: true + completion: + - external diff --git a/hi-IN/resources/NEW Fish 4.srt b/hi-IN/resources/NEW Fish 4.srt new file mode 100644 index 
0000000..2f9c01e --- /dev/null +++ b/hi-IN/resources/NEW Fish 4.srt @@ -0,0 +1,24 @@ +1 +00:00:04,280 --> 00:00:09,120 +Click on 'Back to project',  +then click on 'Learn & Test'. + +2 +00:00:09,120 --> 00:00:16,520 +Train your new machine learning  +model - it might take a few minutes. + +3 +00:00:16,520 --> 00:00:19,720 +Click on 'Start listening' and then say 'left'. + +4 +00:00:19,720 --> 00:00:23,360 +Test whether the model  +recognises you saying 'left'. + +5 +00:00:23,360 --> 00:00:29,960 +Also check whether the model  +recognises 'right', 'up' and 'down'. + diff --git a/hi-IN/resources/NEW Fish 5.srt b/hi-IN/resources/NEW Fish 5.srt new file mode 100644 index 0000000..c07c129 --- /dev/null +++ b/hi-IN/resources/NEW Fish 5.srt @@ -0,0 +1,64 @@ +1 +00:00:05,480 --> 00:00:10,840 +Click on 'Back to project', then click on 'Make'. + +2 +00:00:10,840 --> 00:00:15,400 +You're going to use the model in Scratch 3. + +3 +00:00:15,400 --> 00:00:24,280 +Go to 'Project templates' and  +find the template for Fish Food. + +4 +00:00:24,280 --> 00:00:26,560 +Some code has been added for you. + +5 +00:00:26,560 --> 00:00:33,000 +Open the menu of special Machine Learning for  +Kids blocks, and drag a 'when I hear up' block. + +6 +00:00:33,000 --> 00:00:40,960 +Now add some code so that when you say  +the word 'up', the fish moves upwards. + +7 +00:00:40,960 --> 00:00:45,840 +Do the same for down. + +8 +00:00:45,840 --> 00:00:52,960 +Add some code for left and right too. + +9 +00:00:52,960 --> 00:00:55,000 +Now it's time to test the model. + +10 +00:00:55,000 --> 00:00:59,480 +Click the green flag and then say 'up' + +11 +00:00:59,480 --> 00:00:59,920 +'down' + +12 +00:01:01,680 --> 00:01:03,200 +'left' + +13 +00:01:03,200 --> 00:01:04,280 +and 'right'. + +14 +00:01:04,280 --> 00:01:06,360 +Watch your fish move! + +15 +00:01:06,360 --> 00:01:13,520 +Use your voice to move the  +fish and eat the falling food. + diff --git a/hi-IN/resources/NEW Fish food 1.srt b/hi-IN/resources/NEW Fish food 1.srt new file mode 100644 index 0000000..adbbe6f --- /dev/null +++ b/hi-IN/resources/NEW Fish food 1.srt @@ -0,0 +1,29 @@ +1 +00:00:03,760 --> 00:00:07,480 +Go to rpf.io/ml4k + +2 +00:00:07,480 --> 00:00:10,920 +Click on 'Get started', then 'Try it now'. + +3 +00:00:10,920 --> 00:00:16,560 +Add a new project, call it 'Fish food',  +and set it to learn to recognise sounds. + +4 +00:00:16,560 --> 00:00:19,440 +Store the data in your web browser. + +5 +00:00:19,440 --> 00:00:21,520 +Click on the name of the project, + +6 +00:00:21,520 --> 00:00:23,160 +then click 'Train'. + +7 +00:00:23,160 --> 00:00:29,640 +Allow microphone access if you are asked. + diff --git a/hi-IN/resources/NEW Fish food 2.srt b/hi-IN/resources/NEW Fish food 2.srt new file mode 100644 index 0000000..d275b89 --- /dev/null +++ b/hi-IN/resources/NEW Fish food 2.srt @@ -0,0 +1,12 @@ +1 +00:00:04,560 --> 00:00:06,960 +Now, add an example of background noise + +2 +00:00:06,960 --> 00:00:11,680 +- so don't say anything when you record. + +3 +00:00:11,680 --> 00:00:20,160 +You're going to need eight examples. + diff --git a/hi-IN/resources/NEW Fish food 3.srt b/hi-IN/resources/NEW Fish food 3.srt new file mode 100644 index 0000000..dfcdb51 --- /dev/null +++ b/hi-IN/resources/NEW Fish food 3.srt @@ -0,0 +1,41 @@ +1 +00:00:00,400 --> 00:00:05,080 +So now it's time to add training  +samples for your actual commands. + +2 +00:00:05,080 --> 00:00:09,840 +First, you're going to add a label called left. 
+ +3 +00:00:09,840 --> 00:00:14,800 +Now record yourself saying the word 'left'. + +4 +00:00:14,800 --> 00:00:21,400 +Repeat this until you have  +eight different examples. + +5 +00:00:21,400 --> 00:00:27,280 +Then do the same for right. + +6 +00:00:27,280 --> 00:00:29,960 +And you're also going to add  +eight examples for that one. + +7 +00:00:34,360 --> 00:00:41,880 +And then up and down, add eight  +examples of you saying the word 'up', + +8 +00:00:41,880 --> 00:00:43,280 +and the word 'down'. + +9 +00:00:43,280 --> 00:00:54,400 +You're adding eight so there's  +enough data to train your model with. + diff --git a/hi-IN/resources/fish-food-starter.sb3 b/hi-IN/resources/fish-food-starter.sb3 new file mode 100644 index 0000000..31c1639 Binary files /dev/null and b/hi-IN/resources/fish-food-starter.sb3 differ diff --git a/hi-IN/resources/readme.txt b/hi-IN/resources/readme.txt new file mode 100644 index 0000000..0e0956c --- /dev/null +++ b/hi-IN/resources/readme.txt @@ -0,0 +1 @@ +To watch a video with subtitles on VLC (videolan.org), ensure the video file and subtitle file are in the same folder and have the exact same name (e.g., video.mp4 and video.srt). Open the video in VLC, and it will automatically load the subtitles. If the subtitles don’t appear, right-click the video screen, go to **Subtitle**, then **Add Subtitle File**, and select the correct .srt file. Enjoy watching with subtitles! \ No newline at end of file diff --git a/hi-IN/step_1.md b/hi-IN/step_1.md new file mode 100644 index 0000000..763490f --- /dev/null +++ b/hi-IN/step_1.md @@ -0,0 +1,26 @@ +## Introduction + +Train a machine learning model to recognise voice commands 'up', 'down', 'left', and 'right', and use them to control a fish in a fun game. + +You will need a **microphone**. + +![A Scratch project with a clownfish and a doughnut in an underwater scene.](images/whatyouwillmake.png) + +\--- collapse --- + +--- + +## title: Where are my voice commands stored? + +- This project uses a technology called 'machine learning'. Machine learning systems are trained using a large amount of data. +- This project does not require you to create an account or log in. For this project, the examples you use to make the model are only stored temporarily in your browser (only on your machine). + +\--- /collapse --- + +## --- collapse --- + +## title: No YouTube? Download the videos! + +You can [download all the videos for this project](https://rpf.io/p/en/fish-food-go){:target="_blank"}. + +\--- /collapse --- diff --git a/hi-IN/step_2.md b/hi-IN/step_2.md new file mode 100644 index 0000000..ff86a5e --- /dev/null +++ b/hi-IN/step_2.md @@ -0,0 +1,43 @@ +## Set up the project + + +
+ + +\--- task --- + +Go to [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} in a web browser. + +Click on **Try it now**. + +\--- /task --- + +\--- task --- + +Click on **Projects** in the menu bar at the top. + +Click on the **+ Add a new project** button. + +Name your project `Fish food` and set it to learn to recognise **sounds**, and store data **in your web browser**. Then click on **Create**. +![Creating a project](images/create-project.png) + +You should now see 'Fish food' in the projects list. Click on the project. +![Project list with Fish food listed.](images/projects-list.png) + +\--- /task --- + +\--- task --- + +Click on the **Train** button. +![Project main menu with an arrow pointing to the Train button.](images/project-train.png) + +If you see a pop-up message asking to use the microphone, click on **Allow on every visit**. + +![Pop-up message asking to allow microphone use.](images/allow-microphone.png) + +\--- /task --- + + + diff --git a/hi-IN/step_3.md b/hi-IN/step_3.md new file mode 100644 index 0000000..f34e1a8 --- /dev/null +++ b/hi-IN/step_3.md @@ -0,0 +1,27 @@ +## Background noise + + +
+ + +First, you will collect samples of background noise. This will help your machine learning model to tell the difference between your voice commands, and the background noise where you are. + +\--- task --- + +Click the **+ Add example** button in **background noise**. + +Click on the microphone but don't say anything to record 2 seconds of background noise. +![Arrow pointing to microphone button.](images/record-button.png) + +Click the **Add** button to save your recording. + +\--- /task --- + +\--- task --- + +Repeat those steps until you have **at least 8 examples** of background noise. +![Bucket filled with 8 background noise examples.](images/8-background.png) + +\--- /task --- diff --git a/hi-IN/step_4.md b/hi-IN/step_4.md new file mode 100644 index 0000000..b9dc8e6 --- /dev/null +++ b/hi-IN/step_4.md @@ -0,0 +1,42 @@ +## Record the directions + + +
+ + +Now you will record 8 examples of each word ('up', 'down', 'left', and 'right') so that your machine learning model can learn to recognise them. + +\--- task --- + +Click on **+ Add new label** on the top right of the screen and add a label called `left`. + +\--- /task --- + +\--- task --- + +Click on **+ Add example** inside the box for the new `left` label, and record yourself saying "left". + +Repeat until you have recorded **at least 8 examples**. + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `right` and record 8 examples of you saying "right". + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `up` and record 8 examples of you saying "up". + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `down` and record 8 examples of you saying "down". + +\--- /task --- + diff --git a/hi-IN/step_5.md b/hi-IN/step_5.md new file mode 100644 index 0000000..e4be274 --- /dev/null +++ b/hi-IN/step_5.md @@ -0,0 +1,42 @@ +## Train the model + + +
+ + +You have gathered the examples you need, now you will use these examples to train your machine learning model. + +\--- task --- + +Click on **< Back to project** in the top left-hand corner. + +Click on **Learn & Test**. + +Click on the button labelled **Train new machine learning model**. This may take a few minutes to complete. +![Arrow pointing to a button saying 'Train new machine learning model'.](images/train-new-model.png) + +\--- /task --- + +Once the training has finished, you can test how well your model recognises your voice commands. + +\--- task --- + +Click the **Start listening** button, then say "left". + +\--- /task --- + +If your machine learning model recognises it, it will display what it predicts you said. +![Arrow pointing to the start listening button.](images/test-your-model.png) + +\--- task --- + +Test whether the model recognises "up", "down", and "right" as well. + +\--- /task --- + +If you are not happy with how the model works, go back to the **Train** page and add more examples, then train your model again. + + + diff --git a/hi-IN/step_6.md b/hi-IN/step_6.md new file mode 100644 index 0000000..0d586a4 --- /dev/null +++ b/hi-IN/step_6.md @@ -0,0 +1,71 @@ +## Move the fish + + +
+ + +Now that your model can distinguish between words, you can use it in a Scratch program to move a fish around the screen. + +\--- task --- + +Click on the **< Back to project** link. + +Click on **Make**. + +Click on **Scratch 3**. + +Click on **Open in Scratch 3**. + +\--- /task --- + +\--- task --- + +Click on **Project templates** at the top and select the 'Fish food' project to load the fish sprite, which has some code already added to it. + +\--- /task --- + +Machine Learning for Kids has added some special blocks to Scratch to allow you to use the model you just trained. Find them at the bottom of the blocks list. + +![A list of new blocks created by Machine Learning for Kids, including instructions such as 'Start listening', 'Stop listening', and 'When I hear left'.](images/new-blocks.png) + +\--- task --- + +With the **fish** sprite selected, click on the **Code** tab. Find the right place in the code and add a special block to tell the model to start listening. + +![In the fish sprite, a 'start listening' block is added after the 'when flag clicked' block.](images/start-listening.png) + +\--- /task --- + +\--- task --- + +Add the code for 'up' to the **Fish** sprite. +![In the fish sprite, a 'when I hear up' block is added, then a 'point in direction 0' block.](images/starter-code.png) + +\--- /task --- + +\--- task --- + +Look at the code you have to move the fish up, then see if you can work out the code for down, left, and right. + +## --- collapse --- + +## title: Show me how + +![Three more pairs of blocks are added: 'When I hear left' and 'point in direction -90'; 'When I hear right' and 'point in direction 90'; 'When I hear down' and 'point in direction 180'.](images/finished-code.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Click the **green flag** and say up, down, left, or right. Check that the fish moves in the direction you expected. + +\--- /task --- + + + + + diff --git a/hi-IN/step_7.md b/hi-IN/step_7.md new file mode 100644 index 0000000..6e10a97 --- /dev/null +++ b/hi-IN/step_7.md @@ -0,0 +1,39 @@ +## Challenge + +\--- challenge --- + +\--- task --- + +Add a variable to keep track of the score, and add a point each time the fish eats some food. + +## --- collapse --- + +## title: Show me how + +Add the circled code to the **Food** sprite. + +![Scratch code: Set score to 0, show, repeat until y position < -170, change y by -3, if touching fish then change score by 1, hide.](images/score-hint.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Add a new sprite that is not food, and deduct points if the fish eats it. + +\--- /task --- + +\--- task --- + +Make the food fall at different random speeds. + +\--- /task --- + +\--- task --- + +Or, if you prefer, make a completely different game that uses voice commands to control a character! + +\--- /task --- + +\--- /challenge --- diff --git a/hi-IN/step_8.md b/hi-IN/step_8.md new file mode 100644 index 0000000..d4b22e9 --- /dev/null +++ b/hi-IN/step_8.md @@ -0,0 +1,3 @@ +## What can you do now? + +There are lots of other machine learning and AI projects in the [Machine learning with Scratch](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning) pathway. 
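The 'Move the fish' step and its 'Show me how' hint above pair each heard command with a Scratch 'point in direction' block: 0 for up, 90 for right, -90 for left, and 180 for down. For readers who prefer text to block screenshots, here is a minimal Python sketch of that same mapping. The `step()` function, the 10-step move distance, and the list of heard commands are made-up stand-ins for what the Scratch starter project and the trained model provide; only the direction values come from the project.

```python
import math

# Scratch-style directions: 0 = up, 90 = right, -90 = left, 180 = down.
# This mirrors the 'when I hear ...' -> 'point in direction ...' pairs
# described in the finished-code screenshot.
DIRECTIONS = {
    "up": 0,
    "right": 90,
    "left": -90,
    "down": 180,
}

def step(x, y, command, distance=10):
    """Return the fish's new (x, y) after hearing one command.

    `command` stands in for the label predicted by the trained model;
    in the real project that prediction arrives through the
    'when I hear ...' blocks, not through this function.
    """
    direction = DIRECTIONS.get(command)
    if direction is None:        # background noise or an unknown word
        return x, y              # the fish simply keeps its position
    # Scratch measures direction clockwise from 'up', so convert to the
    # usual maths angle before using sin/cos.
    radians = math.radians(90 - direction)
    return x + distance * math.cos(radians), y + distance * math.sin(radians)

if __name__ == "__main__":
    x, y = 0, 0
    for heard in ["up", "up", "right", "down", "left"]:
        x, y = step(x, y, heard)
        print(f"heard {heard!r:8} -> fish at ({x:.0f}, {y:.0f})")
```

Running the sketch prints where the fish would end up after each command, which is a quick way to sanity-check the four direction values before wiring up the blocks in Scratch.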
diff --git a/it-IT/images/8-background.png b/it-IT/images/8-background.png new file mode 100644 index 0000000..d27ef20 Binary files /dev/null and b/it-IT/images/8-background.png differ diff --git a/it-IT/images/allow-microphone.png b/it-IT/images/allow-microphone.png new file mode 100644 index 0000000..ea43466 Binary files /dev/null and b/it-IT/images/allow-microphone.png differ diff --git a/it-IT/images/banner.png b/it-IT/images/banner.png new file mode 100644 index 0000000..0a26f21 Binary files /dev/null and b/it-IT/images/banner.png differ diff --git a/it-IT/images/create-project.png b/it-IT/images/create-project.png new file mode 100644 index 0000000..1a48e1c Binary files /dev/null and b/it-IT/images/create-project.png differ diff --git a/it-IT/images/finished-code.png b/it-IT/images/finished-code.png new file mode 100644 index 0000000..502be93 Binary files /dev/null and b/it-IT/images/finished-code.png differ diff --git a/it-IT/images/new-blocks.png b/it-IT/images/new-blocks.png new file mode 100644 index 0000000..593366e Binary files /dev/null and b/it-IT/images/new-blocks.png differ diff --git a/it-IT/images/project-train.png b/it-IT/images/project-train.png new file mode 100644 index 0000000..660489d Binary files /dev/null and b/it-IT/images/project-train.png differ diff --git a/it-IT/images/projects-list.png b/it-IT/images/projects-list.png new file mode 100644 index 0000000..6938954 Binary files /dev/null and b/it-IT/images/projects-list.png differ diff --git a/it-IT/images/record-button.png b/it-IT/images/record-button.png new file mode 100644 index 0000000..4db43f0 Binary files /dev/null and b/it-IT/images/record-button.png differ diff --git a/it-IT/images/score-hint.png b/it-IT/images/score-hint.png new file mode 100644 index 0000000..361b149 Binary files /dev/null and b/it-IT/images/score-hint.png differ diff --git a/it-IT/images/start-listening.png b/it-IT/images/start-listening.png new file mode 100644 index 0000000..517dc83 Binary files /dev/null and b/it-IT/images/start-listening.png differ diff --git a/it-IT/images/starter-code.png b/it-IT/images/starter-code.png new file mode 100644 index 0000000..b9ff8c1 Binary files /dev/null and b/it-IT/images/starter-code.png differ diff --git a/it-IT/images/test-your-model.png b/it-IT/images/test-your-model.png new file mode 100644 index 0000000..84c6652 Binary files /dev/null and b/it-IT/images/test-your-model.png differ diff --git a/it-IT/images/train-new-model.png b/it-IT/images/train-new-model.png new file mode 100644 index 0000000..62ce33f Binary files /dev/null and b/it-IT/images/train-new-model.png differ diff --git a/it-IT/images/whatyouwillmake.png b/it-IT/images/whatyouwillmake.png new file mode 100644 index 0000000..b11ec59 Binary files /dev/null and b/it-IT/images/whatyouwillmake.png differ diff --git a/it-IT/meta.yml b/it-IT/meta.yml new file mode 100644 index 0000000..61aeaf7 --- /dev/null +++ b/it-IT/meta.yml @@ -0,0 +1,22 @@ +title: Fish food +hero_image: images/banner.png +description: Control a fish using only your voice and direct it to the food +version: 1 +listed: true +copyedit: false +last_tested: "2024-06-04" +steps: + - title: What you will make + - title: Set up the project + completion: + - engaged + - title: Background noise + - title: Record the directions + - title: Train the model + - title: Move the fish + completion: + - internal + - title: Challenge + challenge: true + completion: + - external diff --git a/it-IT/resources/NEW Fish 4.srt b/it-IT/resources/NEW Fish 4.srt new file mode 100644 index 
0000000..2da152f --- /dev/null +++ b/it-IT/resources/NEW Fish 4.srt @@ -0,0 +1,24 @@ +1 +00:00:04,280 --> 00:00:09,120 +Fare clic su "Torna al progetto", +quindi fare clicca su "Impara e testa". + +2 +00:00:09,120 --> 00:00:16,520 +Addestra il tuo nuovo modello di apprendimento automatico + - potrebbero volerci alcuni minuti. + +3 +00:00:16,520 --> 00:00:19,720 +Fare clic su 'Inizia ad ascoltare' e poi dire 'sinistra'. + +4 +00:00:19,720 --> 00:00:23,360 +Verifica se il modello +riconosce che dici "sinistra". + +5 +00:00:23,360 --> 00:00:29,960 +Controllare anche se il modello +riconosce 'destra', 'su' e 'giù'. + diff --git a/it-IT/resources/NEW Fish 5.srt b/it-IT/resources/NEW Fish 5.srt new file mode 100644 index 0000000..c07c129 --- /dev/null +++ b/it-IT/resources/NEW Fish 5.srt @@ -0,0 +1,64 @@ +1 +00:00:05,480 --> 00:00:10,840 +Click on 'Back to project', then click on 'Make'. + +2 +00:00:10,840 --> 00:00:15,400 +You're going to use the model in Scratch 3. + +3 +00:00:15,400 --> 00:00:24,280 +Go to 'Project templates' and  +find the template for Fish Food. + +4 +00:00:24,280 --> 00:00:26,560 +Some code has been added for you. + +5 +00:00:26,560 --> 00:00:33,000 +Open the menu of special Machine Learning for  +Kids blocks, and drag a 'when I hear up' block. + +6 +00:00:33,000 --> 00:00:40,960 +Now add some code so that when you say  +the word 'up', the fish moves upwards. + +7 +00:00:40,960 --> 00:00:45,840 +Do the same for down. + +8 +00:00:45,840 --> 00:00:52,960 +Add some code for left and right too. + +9 +00:00:52,960 --> 00:00:55,000 +Now it's time to test the model. + +10 +00:00:55,000 --> 00:00:59,480 +Click the green flag and then say 'up' + +11 +00:00:59,480 --> 00:00:59,920 +'down' + +12 +00:01:01,680 --> 00:01:03,200 +'left' + +13 +00:01:03,200 --> 00:01:04,280 +and 'right'. + +14 +00:01:04,280 --> 00:01:06,360 +Watch your fish move! + +15 +00:01:06,360 --> 00:01:13,520 +Use your voice to move the  +fish and eat the falling food. + diff --git a/it-IT/resources/NEW Fish food 1.srt b/it-IT/resources/NEW Fish food 1.srt new file mode 100644 index 0000000..adbbe6f --- /dev/null +++ b/it-IT/resources/NEW Fish food 1.srt @@ -0,0 +1,29 @@ +1 +00:00:03,760 --> 00:00:07,480 +Go to rpf.io/ml4k + +2 +00:00:07,480 --> 00:00:10,920 +Click on 'Get started', then 'Try it now'. + +3 +00:00:10,920 --> 00:00:16,560 +Add a new project, call it 'Fish food',  +and set it to learn to recognise sounds. + +4 +00:00:16,560 --> 00:00:19,440 +Store the data in your web browser. + +5 +00:00:19,440 --> 00:00:21,520 +Click on the name of the project, + +6 +00:00:21,520 --> 00:00:23,160 +then click 'Train'. + +7 +00:00:23,160 --> 00:00:29,640 +Allow microphone access if you are asked. + diff --git a/it-IT/resources/NEW Fish food 2.srt b/it-IT/resources/NEW Fish food 2.srt new file mode 100644 index 0000000..d275b89 --- /dev/null +++ b/it-IT/resources/NEW Fish food 2.srt @@ -0,0 +1,12 @@ +1 +00:00:04,560 --> 00:00:06,960 +Now, add an example of background noise + +2 +00:00:06,960 --> 00:00:11,680 +- so don't say anything when you record. + +3 +00:00:11,680 --> 00:00:20,160 +You're going to need eight examples. + diff --git a/it-IT/resources/NEW Fish food 3.srt b/it-IT/resources/NEW Fish food 3.srt new file mode 100644 index 0000000..dfcdb51 --- /dev/null +++ b/it-IT/resources/NEW Fish food 3.srt @@ -0,0 +1,41 @@ +1 +00:00:00,400 --> 00:00:05,080 +So now it's time to add training  +samples for your actual commands. + +2 +00:00:05,080 --> 00:00:09,840 +First, you're going to add a label called left. 
+ +3 +00:00:09,840 --> 00:00:14,800 +Now record yourself saying the word 'left'. + +4 +00:00:14,800 --> 00:00:21,400 +Repeat this until you have  +eight different examples. + +5 +00:00:21,400 --> 00:00:27,280 +Then do the same for right. + +6 +00:00:27,280 --> 00:00:29,960 +And you're also going to add  +eight examples for that one. + +7 +00:00:34,360 --> 00:00:41,880 +And then up and down, add eight  +examples of you saying the word 'up', + +8 +00:00:41,880 --> 00:00:43,280 +and the word 'down'. + +9 +00:00:43,280 --> 00:00:54,400 +You're adding eight so there's  +enough data to train your model with. + diff --git a/it-IT/resources/fish-food-starter.sb3 b/it-IT/resources/fish-food-starter.sb3 new file mode 100644 index 0000000..31c1639 Binary files /dev/null and b/it-IT/resources/fish-food-starter.sb3 differ diff --git a/it-IT/resources/readme.txt b/it-IT/resources/readme.txt new file mode 100644 index 0000000..0e0956c --- /dev/null +++ b/it-IT/resources/readme.txt @@ -0,0 +1 @@ +To watch a video with subtitles on VLC (videolan.org), ensure the video file and subtitle file are in the same folder and have the exact same name (e.g., video.mp4 and video.srt). Open the video in VLC, and it will automatically load the subtitles. If the subtitles don’t appear, right-click the video screen, go to **Subtitle**, then **Add Subtitle File**, and select the correct .srt file. Enjoy watching with subtitles! \ No newline at end of file diff --git a/it-IT/step_1.md b/it-IT/step_1.md new file mode 100644 index 0000000..763490f --- /dev/null +++ b/it-IT/step_1.md @@ -0,0 +1,26 @@ +## Introduction + +Train a machine learning model to recognise voice commands 'up', 'down', 'left', and 'right', and use them to control a fish in a fun game. + +You will need a **microphone**. + +![A Scratch project with a clownfish and a doughnut in an underwater scene.](images/whatyouwillmake.png) + +\--- collapse --- + +--- + +## title: Where are my voice commands stored? + +- This project uses a technology called 'machine learning'. Machine learning systems are trained using a large amount of data. +- This project does not require you to create an account or log in. For this project, the examples you use to make the model are only stored temporarily in your browser (only on your machine). + +\--- /collapse --- + +## --- collapse --- + +## title: No YouTube? Download the videos! + +You can [download all the videos for this project](https://rpf.io/p/en/fish-food-go){:target="_blank"}. + +\--- /collapse --- diff --git a/it-IT/step_2.md b/it-IT/step_2.md new file mode 100644 index 0000000..ff86a5e --- /dev/null +++ b/it-IT/step_2.md @@ -0,0 +1,43 @@ +## Set up the project + + +
+ + +\--- task --- + +Go to [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} in a web browser. + +Click on **Try it now**. + +\--- /task --- + +\--- task --- + +Click on **Projects** in the menu bar at the top. + +Click on the **+ Add a new project** button. + +Name your project `Fish food` and set it to learn to recognise **sounds**, and store data **in your web browser**. Then click on **Create**. +![Creating a project](images/create-project.png) + +You should now see 'Fish food' in the projects list. Click on the project. +![Project list with Fish food listed.](images/projects-list.png) + +\--- /task --- + +\--- task --- + +Click on the **Train** button. +![Project main menu with an arrow pointing to the Train button.](images/project-train.png) + +If you see a pop-up message asking to use the microphone, click on **Allow on every visit**. + +![Pop-up message asking to allow microphone use.](images/allow-microphone.png) + +\--- /task --- + + + diff --git a/it-IT/step_3.md b/it-IT/step_3.md new file mode 100644 index 0000000..f34e1a8 --- /dev/null +++ b/it-IT/step_3.md @@ -0,0 +1,27 @@ +## Background noise + + +
+ + +First, you will collect samples of background noise. This will help your machine learning model to tell the difference between your voice commands, and the background noise where you are. + +\--- task --- + +Click the **+ Add example** button in **background noise**. + +Click on the microphone but don't say anything to record 2 seconds of background noise. +![Arrow pointing to microphone button.](images/record-button.png) + +Click the **Add** button to save your recording. + +\--- /task --- + +\--- task --- + +Repeat those steps until you have **at least 8 examples** of background noise. +![Bucket filled with 8 background noise examples.](images/8-background.png) + +\--- /task --- diff --git a/it-IT/step_4.md b/it-IT/step_4.md new file mode 100644 index 0000000..b9dc8e6 --- /dev/null +++ b/it-IT/step_4.md @@ -0,0 +1,42 @@ +## Record the directions + + +
+ + +Now you will record 8 examples of each word ('up', 'down', 'left', and 'right') so that your machine learning model can learn to recognise them. + +\--- task --- + +Click on **+ Add new label** on the top right of the screen and add a label called `left`. + +\--- /task --- + +\--- task --- + +Click on **+ Add example** inside the box for the new `left` label, and record yourself saying "left". + +Repeat until you have recorded **at least 8 examples**. + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `right` and record 8 examples of you saying "right". + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `up` and record 8 examples of you saying "up". + +\--- /task --- + +\--- task --- + +**+ Add new label** to create another label called `down` and record 8 examples of you saying "down". + +\--- /task --- + diff --git a/it-IT/step_5.md b/it-IT/step_5.md new file mode 100644 index 0000000..e4be274 --- /dev/null +++ b/it-IT/step_5.md @@ -0,0 +1,42 @@ +## Train the model + + +
+ + +You have gathered the examples you need, now you will use these examples to train your machine learning model. + +\--- task --- + +Click on **< Back to project** in the top left-hand corner. + +Click on **Learn & Test**. + +Click on the button labelled **Train new machine learning model**. This may take a few minutes to complete. +![Arrow pointing to a button saying 'Train new machine learning model'.](images/train-new-model.png) + +\--- /task --- + +Once the training has finished, you can test how well your model recognises your voice commands. + +\--- task --- + +Click the **Start listening** button, then say "left". + +\--- /task --- + +If your machine learning model recognises it, it will display what it predicts you said. +![Arrow pointing to the start listening button.](images/test-your-model.png) + +\--- task --- + +Test whether the model recognises "up", "down", and "right" as well. + +\--- /task --- + +If you are not happy with how the model works, go back to the **Train** page and add more examples, then train your model again. + + + diff --git a/it-IT/step_6.md b/it-IT/step_6.md new file mode 100644 index 0000000..0d586a4 --- /dev/null +++ b/it-IT/step_6.md @@ -0,0 +1,71 @@ +## Move the fish + + +
+ + +Now that your model can distinguish between words, you can use it in a Scratch program to move a fish around the screen. + +\--- task --- + +Click on the **< Back to project** link. + +Click on **Make**. + +Click on **Scratch 3**. + +Click on **Open in Scratch 3**. + +\--- /task --- + +\--- task --- + +Click on **Project templates** at the top and select the 'Fish food' project to load the fish sprite, which has some code already added to it. + +\--- /task --- + +Machine Learning for Kids has added some special blocks to Scratch to allow you to use the model you just trained. Find them at the bottom of the blocks list. + +![A list of new blocks created by Machine Learning for Kids, including instructions such as 'Start listening', 'Stop listening', and 'When I hear left'.](images/new-blocks.png) + +\--- task --- + +With the **fish** sprite selected, click on the **Code** tab. Find the right place in the code and add a special block to tell the model to start listening. + +![In the fish sprite, a 'start listening' block is added after the 'when flag clicked' block.](images/start-listening.png) + +\--- /task --- + +\--- task --- + +Add the code for 'up' to the **Fish** sprite. +![In the fish sprite, a 'when I hear up' block is added, then a 'point in direction 0' block.](images/starter-code.png) + +\--- /task --- + +\--- task --- + +Look at the code you have to move the fish up, then see if you can work out the code for down, left, and right. + +## --- collapse --- + +## title: Show me how + +![Three more pairs of blocks are added: 'When I hear left' and 'point in direction -90'; 'When I hear right' and 'point in direction 90'; 'When I hear down' and 'point in direction 180'.](images/finished-code.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Click the **green flag** and say up, down, left, or right. Check that the fish moves in the direction you expected. + +\--- /task --- + + + + + diff --git a/it-IT/step_7.md b/it-IT/step_7.md new file mode 100644 index 0000000..6e10a97 --- /dev/null +++ b/it-IT/step_7.md @@ -0,0 +1,39 @@ +## Challenge + +\--- challenge --- + +\--- task --- + +Add a variable to keep track of the score, and add a point each time the fish eats some food. + +## --- collapse --- + +## title: Show me how + +Add the circled code to the **Food** sprite. + +![Scratch code: Set score to 0, show, repeat until y position < -170, change y by -3, if touching fish then change score by 1, hide.](images/score-hint.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Add a new sprite that is not food, and deduct points if the fish eats it. + +\--- /task --- + +\--- task --- + +Make the food fall at different random speeds. + +\--- /task --- + +\--- task --- + +Or, if you prefer, make a completely different game that uses voice commands to control a character! + +\--- /task --- + +\--- /challenge --- diff --git a/it-IT/step_8.md b/it-IT/step_8.md new file mode 100644 index 0000000..d4b22e9 --- /dev/null +++ b/it-IT/step_8.md @@ -0,0 +1,3 @@ +## What can you do now? + +There are lots of other machine learning and AI projects in the [Machine learning with Scratch](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning) pathway. 
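The challenge hint in the step_7 files describes the Food sprite's script in words: set the score to 0, let the food fall by 3 each time until its y position drops below -170, and change the score by 1 when it touches the fish, after which the food hides. Below is one reading of that loop as a rough Python sketch. The `touching_fish()` helper, the random fish position, and the starting y of 180 are invented for the example; the -170 boundary, the -3 fall step, and the +1 score come from the hint itself.

```python
import random

def drop_one_food(start_y=180):
    """Rough re-telling of the Food sprite hint in Python.

    Mirrors the described blocks: 'repeat until y position < -170',
    'change y by -3', 'if touching fish then change score by 1', hide.
    The fish position and touching_fish() check are made up - Scratch's
    'touching fish?' block does the real collision test.
    """
    score = 0                                  # 'set score to 0'
    fish_y = random.randint(-150, 150)         # pretend fish position

    def touching_fish(y):
        return abs(y - fish_y) <= 3            # stand-in collision check

    y = start_y
    while y >= -170:                           # 'repeat until y position < -170'
        y -= 3                                 # 'change y by -3'
        if touching_fish(y):
            score += 1                         # 'change score by 1'
            break                              # the hint ends with 'hide', so this piece stops
    return score

if __name__ == "__main__":
    print("points from one falling doughnut:", drop_one_food())
```

If your own version keeps a running total across many pieces of food, move the score outside the function, just as the Scratch variable lives outside the falling loop.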
diff --git a/nl-NL/images/allow-microphone.png b/nl-NL/images/allow-microphone.png index 79318cc..dadb4ef 100644 Binary files a/nl-NL/images/allow-microphone.png and b/nl-NL/images/allow-microphone.png differ diff --git a/nl-NL/meta.yml b/nl-NL/meta.yml index 4ae3290..8839c06 100644 --- a/nl-NL/meta.yml +++ b/nl-NL/meta.yml @@ -1,4 +1,3 @@ ---- title: Visvoer hero_image: images/banner.png description: Bestuur een vis met je stem en leid hem naar het voedsel @@ -10,14 +9,14 @@ steps: - title: Wat ga je maken - title: Het project opzetten completion: - - engaged + - engaged - title: Achtergrondgeluid - title: Neem de aanwijzingen op - title: Train het model - title: Verplaats de vis completion: - - internal + - internal - title: Uitdaging challenge: true completion: - - external + - external diff --git a/nl-NL/step_1.md b/nl-NL/step_1.md index 1cffab3..599dddc 100644 --- a/nl-NL/step_1.md +++ b/nl-NL/step_1.md @@ -6,22 +6,21 @@ Je hebt een **microfoon** nodig. ![Een Scratch-project met een clownvis en een donut in een onderwaterscène.](images/whatyouwillmake.png) ---- collapse --- +\--- collapse --- --- -title: Waar worden mijn gesproken instructies opgeslagen? ---- + +## title: Waar worden mijn gesproken instructies opgeslagen? - Dit project maakt gebruik van een technologie genaamd 'machine learning'. Machine learning-systemen worden getraind met behulp van een grote hoeveelheid data. - Voor dit project hoef je geen account aan te maken of in te loggen. Voor dit project worden de voorbeelden die je gebruikt om het model te maken tijdelijk opgeslagen in je browser (alleen op je machine). ---- /collapse --- +\--- /collapse --- ---- collapse --- ---- -title: Geen YouTube? Download de video's! ---- +## --- collapse --- + +## title: Geen YouTube? Download de video's! -Je kunt [alle video's voor dit project downloaden](https://rpf.io/p/nl-NL/fish-food-go){:target="_blank"}. +Je kunt [alle video's voor dit project downloaden](https://rpf.io/p/en/fish-food-go){:target="_blank"}. ---- /collapse --- +\--- /collapse --- diff --git a/nl-NL/step_2.md b/nl-NL/step_2.md index f6449cc..d0ac95c 100644 --- a/nl-NL/step_2.md +++ b/nl-NL/step_2.md @@ -6,15 +6,15 @@ ---- task --- +\--- task --- Ga naar [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} in een webbrowser. Klik op **Probeer nu**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Klik op **Projecten** in de menubalk bovenaan. @@ -26,9 +26,9 @@ Geef je project de naam 'Visvoer' en geef het de opdracht om **geluiden (sounds) Je zou nu 'Visvoer' moeten zien in de projectenlijst. Klik op dit project. ![Projectlijst met visvoer vermeld.](images/projects-list.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Klik op de knop **Train**. ![Project hoofdmenu met een pijl naar de Train button.](images/project-train.png) @@ -37,7 +37,7 @@ Als je een pop-upbericht ziet met de vraag om de microfoon te gebruiken, klik da ![Pop-upbericht met de vraag om het gebruik van de microfoon toe te staan.](images/allow-microphone.png) ---- /task --- +\--- /task --- diff --git a/nl-NL/step_3.md b/nl-NL/step_3.md index 0178af5..66e16b8 100644 --- a/nl-NL/step_3.md +++ b/nl-NL/step_3.md @@ -8,7 +8,7 @@ Eerst verzamel je voorbeelden van achtergrondgeluiden. Dit zal je machine learning model helpen om het verschil te weten tussen je spraakcommando's en het achtergrondgeluid van de plek waar je bent. ---- task --- +\--- task --- Klik op de **+ voeg een voorbeeld toe** knop in **background noise**. 
@@ -17,11 +17,11 @@ Klik op de microfoon, maar zeg niets om 2 seconden achtergrondgeluid op te nemen Klik op de knop **VOEG TOE** om jouw opname op te slaan. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Herhaal deze stappen totdat je **minstens 8 voorbeelden** van achtergrondgeluiden hebt. ![Container gevuld met 8 achtergrondgeluidsvoorbeelden.](images/8-background.png) ---- /task --- +\--- /task --- diff --git a/nl-NL/step_4.md b/nl-NL/step_4.md index 212815d..64f377d 100644 --- a/nl-NL/step_4.md +++ b/nl-NL/step_4.md @@ -8,35 +8,35 @@ Nu ga je 8 voorbeelden van elk woord vastleggen ('omhoog', 'omlaag', 'links' en 'rechts'), zodat jouw machine learning-model deze kan leren herkennen. ---- task --- +\--- task --- Klik rechtsboven in het scherm op **+ Voeg een nieuw label toe** en voeg een label toe met de naam `links`. ---- /task --- +\--- /task --- ---- task --- +\--- task --- -Klik op **+ Voeg een voorbeeld toe** in het vak voor het nieuwe `links` label en neem jezelf op terwijl je "links" zegt. +Klik op **+ Voeg een voorbeeld toe** in het vak voor het nieuwe 'links' label en neem jezelf op terwijl je "links" zegt. Herhaal dit totdat je **minstens 8 voorbeelden** hebt opgenomen. ---- /task --- +\--- /task --- ---- task --- +\--- task --- **+ Voeg een nieuw label toe** om een ander label te maken met de naam `rechts` en neem 8 voorbeelden op waarin jij "rechts" zegt. ---- /task --- +\--- /task --- ---- task --- +\--- task --- **+ Voeg nieuw label toe** om nog een label te maken met de naam `omhoog` en neem 8 voorbeelden op waarin je "omhoog" zegt. ---- /task --- +\--- /task --- ---- task --- +\--- task --- -**+ Voeg nieuw label toe** om nog een label te maken met de naam `omlaag` en neem 8 voorbeelden op waarin je "omlaag" zegt. +**+ Voeg nieuw label toe** om nog een label te maken met de naam 'omlaag' en neem 8 voorbeelden op waarin je "omlaag" zegt. ---- /task --- +\--- /task --- diff --git a/nl-NL/step_5.md b/nl-NL/step_5.md index 5d217b3..cfdebbc 100644 --- a/nl-NL/step_5.md +++ b/nl-NL/step_5.md @@ -8,7 +8,7 @@ Je hebt de voorbeelden verzameld die je nodig hebt, nu ga je deze gebruiken om jouw machine learning-model te trainen. ---- task --- +\--- task --- Klik op ** 00:00:09,120 +Clica em "Voltar para o projeto", +depois clica em "Aprender & Testar". + +2 +00:00:09,120 --> 00:00:16,520 +Treina o teu novo modelo de +machine learning - pode demorar alguns minutos. + +3 +00:00:16,520 --> 00:00:19,720 +Clica em "Começar a ouvir" e depois diz "esquerda". + +4 +00:00:19,720 --> 00:00:23,360 +Testa se o modelo te +reconhece a dizer "esquerda". + +5 +00:00:23,360 --> 00:00:29,960 +Verifica também se o modelo +reconhece "direita", "para cima" e "para baixo". + diff --git a/pt-PT/resources/NEW Fish 5.srt b/pt-PT/resources/NEW Fish 5.srt new file mode 100644 index 0000000..0ec7108 --- /dev/null +++ b/pt-PT/resources/NEW Fish 5.srt @@ -0,0 +1,64 @@ +1 +00:00:05,480 --> 00:00:10,840 +Clica em "Voltar para o projeto", depois clica em "Criar". + +2 +00:00:10,840 --> 00:00:15,400 +Vais usar o modelo no Scratch 3. + +3 +00:00:15,400 --> 00:00:24,280 +Vai para "Project templates" e +encontra o template de Fish Food. + +4 +00:00:24,280 --> 00:00:26,560 +Algum código foi adicionado para ti. + +5 +00:00:26,560 --> 00:00:33,000 +Abre o menu de blocos especiais do Machine Learning +for Kids, e arrasta o bloco "when I hear para_cima". + +6 +00:00:33,000 --> 00:00:40,960 +Agora adiciona algum código para que quando disseres +as palavras "para cima", o peixe se mova para cima. 
+ +7 +00:00:40,960 --> 00:00:45,840 +Faz o mesmo para baixo. + +8 +00:00:45,840 --> 00:00:52,960 +Adiciona algum código para a esquerda e para a direita também. + +9 +00:00:52,960 --> 00:00:55,000 +Agora é altura de testar o modelo. + +10 +00:00:55,000 --> 00:00:59,480 +Clica na bandeira verde e diz "para cima" + +11 +00:00:59,480 --> 00:00:59,920 +"para baixo" + +12 +00:01:01,680 --> 00:01:03,200 +"esquerda" + +13 +00:01:03,200 --> 00:01:04,280 +e "direita". + +14 +00:01:04,280 --> 00:01:06,360 +Vê como o teu peixe se move! + +15 +00:01:06,360 --> 00:01:13,520 +Usa a tua voz para moveres o +peixe e comer a comida que vai caindo. + diff --git a/pt-PT/resources/NEW Fish food 1.srt b/pt-PT/resources/NEW Fish food 1.srt new file mode 100644 index 0000000..075acbe --- /dev/null +++ b/pt-PT/resources/NEW Fish food 1.srt @@ -0,0 +1,29 @@ +1 +00:00:03,760 --> 00:00:07,480 +Vai a rpf.io/ml4k + +2 +00:00:07,480 --> 00:00:10,920 +Clica em "Começar", e depois "Experimenta agora". + +3 +00:00:10,920 --> 00:00:16,560 +Adiciona um novo projeto, chama-o "Comida para peixe", +e configura-o para aprender a reconhecer sons. + +4 +00:00:16,560 --> 00:00:19,440 +Armazena os dados no teu navegador web. + +5 +00:00:19,440 --> 00:00:21,520 +Clica no nome do projeto, + +6 +00:00:21,520 --> 00:00:23,160 +depois clica em "Treinar". + +7 +00:00:23,160 --> 00:00:29,640 +Permite o acesso ao microfone se for solicitado. + diff --git a/pt-PT/resources/NEW Fish food 2.srt b/pt-PT/resources/NEW Fish food 2.srt new file mode 100644 index 0000000..1e91467 --- /dev/null +++ b/pt-PT/resources/NEW Fish food 2.srt @@ -0,0 +1,12 @@ +1 +00:00:04,560 --> 00:00:06,960 +Agora, adiciona um exemplo de ruído de fundo + +2 +00:00:06,960 --> 00:00:11,680 +- por isso não digas nada quando estiveres a gravar. + +3 +00:00:11,680 --> 00:00:20,160 +Vais precisar de oito exemplos. + diff --git a/pt-PT/resources/NEW Fish food 3.srt b/pt-PT/resources/NEW Fish food 3.srt new file mode 100644 index 0000000..d071034 --- /dev/null +++ b/pt-PT/resources/NEW Fish food 3.srt @@ -0,0 +1,41 @@ +1 +00:00:00,400 --> 00:00:05,080 +Agora é altura de adicionar amostras +de treino para os teus comandos. + +2 +00:00:05,080 --> 00:00:09,840 +Primeiro, vais adicionar um rótulo chamado esquerda. + +3 +00:00:09,840 --> 00:00:14,800 +Agora grava-te a dizer a palavra "esquerda". + +4 +00:00:14,800 --> 00:00:21,400 +Repete isto até teres +oito exemplos diferentes. + +5 +00:00:21,400 --> 00:00:27,280 +Depois faz o mesmo para a direita. + +6 +00:00:27,280 --> 00:00:29,960 +E vais também adicionar +oito exemplos para este. + +7 +00:00:34,360 --> 00:00:41,880 +E depois para cima e para baixo, adiciona oito +exemplos teus a dizer as palavras "para cima", + +8 +00:00:41,880 --> 00:00:43,280 +e as palavras "para baixo". + +9 +00:00:43,280 --> 00:00:54,400 +Estás a adicionar oito, portanto há +dados suficientes para treinar o teu modelo. + diff --git a/pt-PT/resources/fish-food-starter.sb3 b/pt-PT/resources/fish-food-starter.sb3 new file mode 100644 index 0000000..3245a67 Binary files /dev/null and b/pt-PT/resources/fish-food-starter.sb3 differ diff --git a/pt-PT/resources/readme.txt b/pt-PT/resources/readme.txt new file mode 100644 index 0000000..6da75e7 --- /dev/null +++ b/pt-PT/resources/readme.txt @@ -0,0 +1 @@ +Para assistir a um vídeo com legendas no VLC (videolan.org), certifica-te de que o arquivo do vídeo e o ficheiro das legendas está na mesma pasta e que tenham o mesmo nome (por ex.: video.mp4 e video.srt). 
Abre o vídeo no VLC, que ele vai carregar automaticamente as legendas. Se as legendas não aparecerem, clica com o lado direito do rato na tela do vídeo, vai a **Legendas**, depois **Adicionar Ficheiro de Legenda**, e seleciona o ficheiro .srt correto. Diverte-te a assistir com legendas! \ No newline at end of file diff --git a/pt-PT/step_1.md b/pt-PT/step_1.md new file mode 100644 index 0000000..a5a7b3d --- /dev/null +++ b/pt-PT/step_1.md @@ -0,0 +1,26 @@ +## Introdução + +Treina o modelo de machine learning para reconhecer os comandos de voz "para cima", "para baixo", "esquerda" e "direita", e utiliza-os para controlar o peixe num jogo divertido. + +Vais precisar de um **microfone**. + +![Um projeto Scratch com um peixe-palhaço e um donut numa cena subaquática.](images/whatyouwillmake.png) + +\--- collapse --- + +--- + +## title: Onde ficam guardados os meus comandos de voz? + +- Este projeto usa uma tecnologia chamada "machine learning". Os sistemas de machine learning são treinados com uma grande quantidade de dados. +- Este projeto não exige que cries uma conta ou faças login. Para este projeto, os exemplos que usas para fazer o modelo são armazenados temporariamente no teu navegador (apenas na tua máquina). + +\--- /collapse --- + +## --- collapse --- + +## title: Não tens Youtube? Descarrega estes vídeos! + +Podes [descarregar todos os vídeos para este projeto](https://rpf.io/p/en/fish-food-go){:target="_blank"}. + +\--- /collapse --- diff --git a/pt-PT/step_2.md b/pt-PT/step_2.md new file mode 100644 index 0000000..b117fdd --- /dev/null +++ b/pt-PT/step_2.md @@ -0,0 +1,43 @@ +## Cria o teu projeto + + +
+ + +\--- task --- + +Vai para [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} num navegador web. + +Clica em **Experimenta agora**. + +\--- /task --- + +\--- task --- + +Clica em **Projetos** na barra do menu na parte superior. + +Clica no botão **+ Adicionar um novo projeto**. + +Dá nome ao teu projeto `Comida para peixe` e configura-o para aprender a reconhecer **sons** e armazenar dados **no teu navegador web**. E clica em **Criar**. +![Criar um projeto](images/create-project.png) + +Deves ver agora "Comida para peixe" na lista de projetos. Clica em cima do projeto. +![Lista de projetos com Comida para peixe listada.](images/projects-list.png) + +\--- /task --- + +\--- task --- + +Clica no botão **Treinar**. +![Menu principal do projeto com uma seta a apontar para o botão Treinar.](images/project-train.png) + +Se vires uma mensagem pop-up a solicitar a utilização do microfone, clica em **Permitir em todas as visitas**. + +![Mensagem pop-up a solicitar permissão para utilização do microfone.](images/allow-microphone.png) + +\--- /task --- + + + diff --git a/pt-PT/step_3.md b/pt-PT/step_3.md new file mode 100644 index 0000000..aecd379 --- /dev/null +++ b/pt-PT/step_3.md @@ -0,0 +1,27 @@ +## Ruído de fundo + + +
+ + +Primeiro, vais recolher amostras do ruído de fundo. Isto vai ajudar o teu modelo de machine learning a distinguir os teus comandos de voz, do ruído de fundo do local onde estás. + +\--- task --- + +Clica no botão **+ Adicionar exemplos** em **background noise**. + +Clica no microfone, mas não digas nada, para gravar 2 segundos de ruído de fundo. +![Seta que aponta para o botão do microfone.](images/record-button.png) + +Clica no botão **Adicionar** para guardar a tua gravação. + +\--- /task --- + +\--- task --- + +Repete estes passos até teres **pelo menos 8 exemplos** de ruído de fundo. +![Balde preenchido com 8 exemplos de ruído de fundo.](images/8-background.png) + +\--- /task --- diff --git a/pt-PT/step_4.md b/pt-PT/step_4.md new file mode 100644 index 0000000..6e41004 --- /dev/null +++ b/pt-PT/step_4.md @@ -0,0 +1,42 @@ +## Grava as direções + + +
+ + +Agora vais registar 8 exemplos de cada palavra ("para cima", "para baixo", "esquerda" e "direita") para que o teu modelo de machine learning possa aprender a reconhecê-las. + +\--- task --- + +Clica em **+ Adicionar um novo rótulo** no canto superior direito do ecrã e adiciona um rótulo chamado `esquerda`. + +\--- /task --- + +\--- task --- + +Clica em **+ Adicionar exemplos** dentro da caixa para o novo rótulo `esquerda`, e grava-te a dizer "esquerda". + +Repete até teres registado **pelo menos 8 exemplos**. + +\--- /task --- + +\--- task --- + +**+ Adiciona um novo rótulo** para criar outro rótulo chamado `direita` e regista 8 exemplos teus a dizer "direita". + +\--- /task --- + +\--- task --- + +**+ Adicionar um novo rótulo** para criar outro rótulo chamado `para cima` e gravar 8 exemplos teus a dizer "para cima". + +\--- /task --- + +\--- task --- + +**+ Adicionar um novo rótulo** para criar outro rótulo chamado `para baixo` e gravar 8 exemplos teus a dizer "para baixo". + +\--- /task --- + diff --git a/pt-PT/step_5.md b/pt-PT/step_5.md new file mode 100644 index 0000000..60adef8 --- /dev/null +++ b/pt-PT/step_5.md @@ -0,0 +1,42 @@ +## Treina o modelo + + +
+ + +Reuniste os exemplos de que precisas, e agora vais usar esses exemplos para treinar o teu modelo de machine learning. + +\--- task --- + +Clica em **< Voltar para o projeto** no canto superior esquerdo. + +Clica em **Aprender & testar**. + +Clica no botão chamado **Treinar um novo modelo de Machine Learning**. Isto pode demorar alguns minutos até acontecer. +![Seta que aponta para um botão a dizer 'Treinar um novo modelo de aprendizagem de máquina'.](images/train-new-model.png) + +\--- /task --- + +Quando o treino acabar, podes testar o quão bem o teu modelo reconhece os teus comandos de voz. + +\--- task --- + +Clica no botão **Começar a ouvir**, e diz "esquerda". + +\--- /task --- + +Se o teu modelo de machine learning reconhecer o comando de voz, vai exibir a previsão do que disseste. +![Seta que aponta para o botão de começar a ouvir.](images/test-your-model.png) + +\--- task --- + +Testa se o modelo também reconhece "para cima", "para baixo" e "direita". + +\--- /task --- + +Se não ficares satisfeito com o funcionamento do modelo, volta à página **Treinar** e adiciona mais exemplos, depois treina o modelo outra vez. + + + diff --git a/pt-PT/step_6.md b/pt-PT/step_6.md new file mode 100644 index 0000000..0c6b5a0 --- /dev/null +++ b/pt-PT/step_6.md @@ -0,0 +1,71 @@ +## Move o peixe + + +
+ + +Agora que o teu modelo consegue distinguir as palavras, podes usá-lo no programa Scratch para movimentar o peixe pelo ecrã. + +\--- task --- + +Clica no link **< Voltar para o projeto**. + +Clica em **Criar**. + +Clica em **Scratch 3**. + +Clica em **Abrir no Scratch 3**. + +\--- /task --- + +\--- task --- + +Clica em **Project templates** na parte superior e seleciona o projeto "Fish Food" para carregar o ator peixe, que já contém algum código adicionado. + +\--- /task --- + +Machine learning for Kids adicionou alguns blocos especiais ao Scratch para permitir que utilizes o modelo que acabaste de treinar. Encontra-os na última parte da lista de blocos. + +![Uma lista de novos blocos criados pelo Machine Learning for Kids, incluindo instruções como "Começar a ouvir", "Parar de ouvir" e "Quando ouvires esquerda".](images/new-blocks.png) + +\--- task --- + +Com o ator **peixe** selecionado, clica no separador **Código**. Encontra o lugar certo no teu código e adiciona um bloco especial para dizer ao modelo para começar a ouvir. + +![No ator peixe, é adicionado um bloco 'começar a ouvir' após o bloco 'quando alguém clicar em bandeira'.](images/start-listening.png) + +\--- /task --- + +\--- task --- + +Adiciona o código de "para cima" ao ator **Peixe**. +![No ator peixe, é adicionado um bloco "quando ouvires para cima", depois um bloco "apontar na direção 0".](images/starter-code.png) + +\--- /task --- + +\--- task --- + +Observa o código que tens para movimentar o peixe para cima e tenta decifrar o código para movimentar para baixo, esquerda e direita. + +## --- collapse --- + +## title: Mostra-me como + +![Mais três pares de blocos são adicionados: "Quando ouvires esquerda" e "apontar na direção -90"; "Quando ouvires direita" e "apontar na direção 90"; "Quando ouvir para baixo" e "apontar na direção 180".](images/finished-code.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Clica na **bandeira verde** e diz para cima, para baixo, esquerda ou direita. Verifica se o peixe se movimenta na direção esperada. + +\--- /task --- + + + + + diff --git a/pt-PT/step_7.md b/pt-PT/step_7.md new file mode 100644 index 0000000..b1669d7 --- /dev/null +++ b/pt-PT/step_7.md @@ -0,0 +1,39 @@ +## Desafio + +\--- challenge --- + +\--- task --- + +Adiciona a variável para acompanhar a pontuação e adiciona um ponto cada vez que o peixe comer alguma comida. + +## --- collapse --- + +## title: Mostra-me como + +Adiciona o código destacado por um circulo ao ator **Food**. + +![Código Scratch: Define a pontuação para 0, mostra-te, repete até a posição y < -170, altera y em -3, se tocar em peixe, então altera a pontuação para 1, esconde.](images/score-hint.png) + +\--- /collapse --- + +\--- /task --- + +\--- task --- + +Adiciona um novo ator que não seja comida e deduz pontos se o peixe o comer. + +\--- /task --- + +\--- task --- + +Faz com que a comida caia em diferentes velocidades aleatoriamente. + +\--- /task --- + +\--- task --- + +Ou, se preferires, cria um jogo completamente diferente que usa comandos de voz para controlar o personagem! + +\--- /task --- + +\--- /challenge --- diff --git a/pt-PT/step_8.md b/pt-PT/step_8.md new file mode 100644 index 0000000..0e0cb72 --- /dev/null +++ b/pt-PT/step_8.md @@ -0,0 +1,3 @@ +## O que se segue? + +Existem muitos outros projetos de machine learning e IA em [Machine learning com Scratch](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning). 
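Every language version of the recording steps asks for the same thing before training: the 'background noise' label plus the four command labels, each with at least eight recorded examples. The tiny Python sketch below only checks that structure; the label names follow the English steps, the counts are invented, and Machine Learning for Kids does this bookkeeping for you in the browser, so this is purely illustrative.

```python
# Purely illustrative: the five labels and the 8-example minimum come from
# the project steps; the counts below are made up.
MIN_EXAMPLES = 8

recorded_examples = {
    "background noise": 8,
    "left": 8,
    "right": 6,   # pretend this label is two recordings short
    "up": 8,
    "down": 8,
}

def labels_needing_more(examples, minimum=MIN_EXAMPLES):
    """Return the labels that still have fewer than `minimum` recordings."""
    return [label for label, count in examples.items() if count < minimum]

if __name__ == "__main__":
    short = labels_needing_more(recorded_examples)
    if short:
        print("Record more examples for:", ", ".join(short))
    else:
        print(f"All labels have at least {MIN_EXAMPLES} examples - ready to train!")
```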
diff --git a/uk-UA/meta.yml b/uk-UA/meta.yml index bd8b36e..fbeb7ca 100644 --- a/uk-UA/meta.yml +++ b/uk-UA/meta.yml @@ -1,4 +1,3 @@ ---- title: Їжа для рибок hero_image: images/banner.png description: Керуй рибкою за допомогою голосу й спрямовуй її до корму @@ -10,14 +9,14 @@ steps: - title: Що ти зробиш - title: Створи проєкт completion: - - engaged + - engaged - title: Фоновий шум - title: Запиши вказівки - title: Натренуй модель - title: Перемісти рибку completion: - - internal + - internal - title: Додаткове завдання challenge: true completion: - - external + - external diff --git a/uk-UA/step_1.md b/uk-UA/step_1.md index 27aac41..3d5042a 100644 --- a/uk-UA/step_1.md +++ b/uk-UA/step_1.md @@ -6,22 +6,21 @@ ![Проєкт Scratch з рибкою-клоуном та пончиком під водою.](images/whatyouwillmake.png) ---- collapse --- +\--- collapse --- --- -title: Де зберігаються мої голосові команди? ---- + +## title: Де зберігаються мої голосові команди? - Цей проєкт використовує технологію під назвою «машинне навчання». Системи машинного навчання навчаються з використанням великої кількості даних. - Для цього проєкту тобі не потрібно створювати обліковий запис або входити в систему. Приклади голосових команд, які ти запишеш для цього проєкту, тимчасово зберігаються у твоєму браузері (тільки на твоєму комп'ютері). ---- /collapse --- +\--- /collapse --- ---- collapse --- ---- -title: Немає доступу до YouTube? Завантаж відео! ---- +## --- collapse --- + +## title: Немає доступу до YouTube? Завантаж відео! -Ти можеш [завантажити всі відео для цього проєкту](https://rpf.io/p/uk-UA/fish-food-go){:target="_blank"}. +Ти можеш [завантажити всі відео для цього проєкту](https://rpf.io/p/en/fish-food-go){:target="_blank"}. ---- /collapse --- +\--- /collapse --- diff --git a/uk-UA/step_2.md b/uk-UA/step_2.md index f2b65d9..cd48837 100644 --- a/uk-UA/step_2.md +++ b/uk-UA/step_2.md @@ -6,15 +6,15 @@ ---- task --- +\--- task --- Перейди на [machinelearningforkids.co.uk](https://machinelearningforkids.co.uk/#!/login){:target="_blank"} у браузері. Натисни **Спробувати**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни **Проєкти** на панелі меню угорі. @@ -26,18 +26,18 @@ Тепер у списку проєктів має висвітлюватися «Їжа для рибок». Натисни на проєкт. ![Список проєктів із Їжею для рибок.](images/projects-list.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни кнопку **Навчити**. -![Головне меню проєкту зі стрілкою, що вказує на кнопку «Навчити»](images/project-train.png) +![Головне меню проєкту зі стрілкою, що вказує на кнопку «Навчити»] (images/project-train.png) Якщо побачиш вікно із запитом на використання мікрофону, натисни **Дозволити під час кожного відвідування**. ![Вікно із запитом на використання мікрофону.](images/allow-microphone.png) ---- /task --- +\--- /task --- diff --git a/uk-UA/step_3.md b/uk-UA/step_3.md index a2b6295..cd1eca7 100644 --- a/uk-UA/step_3.md +++ b/uk-UA/step_3.md @@ -8,7 +8,7 @@ Спочатку тобі потрібно записати зразки фонового шуму. Це допоможе твоїй моделі машинного навчання відрізнити голосові команди від фонового шуму. ---- task --- +\--- task --- Натисни кнопку **+ Додати приклад** у розділі **фоновий шум**. @@ -17,11 +17,11 @@ Натисни кнопку **Додати**, щоб зберегти запис. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Повтори ці кроки, поки не запишеш **принаймні 8 зразків** фонового шуму. 
![Корзинка з 8 прикладами фонового шуму.](images/8-background.png) ---- /task --- +\--- /task --- diff --git a/uk-UA/step_4.md b/uk-UA/step_4.md index 417c208..7b0beb4 100644 --- a/uk-UA/step_4.md +++ b/uk-UA/step_4.md @@ -8,35 +8,35 @@ Тепер запиши 8 зразків кожного слова («вгору», «вниз», «ліворуч» та «праворуч»), щоб твоя модель машинного навчання навчилася їх розпізнавати. ---- task --- +\--- task --- Натисни **+ Додати нову мітку** у верхньому правому куті екрану і додай мітку з назвою left ("ліворуч"). ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни **+ Додати приклад** у полі з міткою left і запиши, як промовляєш «ліворуч». Повторюй, поки не матимеш **щонайменше 8 зразків**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни **+ Додати нову мітку** і створи мітку з назвою right («праворуч») і запиши 8 зразків, як промовляєш «праворуч». ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни **+ Додати нову мітку** і створи мітку з назвою up («вгору») і запиши 8 зразків, як промовляєш «вгору». ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни **+ Додати нову мітку** і створи мітку з назвою down («вниз») і запиши 8 зразків, як промовляєш «вниз». ---- /task --- +\--- /task --- diff --git a/uk-UA/step_5.md b/uk-UA/step_5.md index e6c83cb..7592c84 100644 --- a/uk-UA/step_5.md +++ b/uk-UA/step_5.md @@ -8,33 +8,33 @@ Необхідні зразки зібрано, тепер тобі треба їх використати, щоб натренувати свою модель машинного навчання. ---- task --- +\--- task --- Натисни **< Назад до проєкту** у верхньому лівому куті. -Натисни **Дізнатися та перевірити**. +Натисни \*_Дізнатися та перевірити_. Натисни кнопку **Навчання нової моделі машинного навчання**. Це може зайняти кілька хвилин. ![Стрілка вказує на кнопку 'Навчання нової моделі машинного навчання'.](images/train-new-model.png) ---- /task --- +\--- /task --- Після завершення навчання ти можеш перевірити, як добре твоя модель розпізнає голосові команди. ---- task --- +\--- task --- Натисни **Почати слухати**, і скажи «ліворуч». ---- /task --- +\--- /task --- Якщо твоя модель машинного навчання розпізнає слово, то вона покаже свій прогноз щодо цього слова. ![Стрілка вказує на кнопку почати слухати.](images/test-your-model.png) ---- task --- +\--- task --- Також перевір, чи розпізнає модель слова «вгору», «вниз» та «праворуч». ---- /task --- +\--- /task --- Якщо поведінка моделі не є задовільною, то повернись на сторінку **Навчити** і додай більше зразків, а потім знову натренуй свою модель. diff --git a/uk-UA/step_6.md b/uk-UA/step_6.md index 122d52e..1deed35 100644 --- a/uk-UA/step_6.md +++ b/uk-UA/step_6.md @@ -8,7 +8,7 @@ Тепер, коли твоя модель може розрізняти команди, ти можеш використати її у програмі «Скретч», щоб пересувати рибку на екрані. ---- task --- +\--- task --- Натисни **< Назад до проєкту**. @@ -18,53 +18,52 @@ Натисни **Відкрити в Scratch 3**. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни **Шаблони проєктів** угорі та вибери проєкт «Їжа для рибок», щоб завантажити спрайт рибки, до якого вже додано певний код. ---- /task --- +\--- /task --- Machine Learning for Kids додали до Скретчу деякі спеціальні блоки, які дозволяють використовувати щойно навчену модель. Знайди їх внизу списку з блоками. 
![Список нових блоків, створених програмою Машинне навчання для дітей, включно з такими інструкціями, як «Почніть слухати», «Припинити слухати» та «When I hear left» (Коли я чую ліворуч).](images/new-blocks.png) ---- task --- +\--- task --- Обери спрайт **рибки** та натисни на вкладку **Код**. Знайди правильне місце у коді та додай спеціальний блок, щоби модель почала слухати. ![У спрайті рибки додано блок «почніть слухати» після блоку «коли натиснуто прапорець».](images/start-listening.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Додай код для вказівки «вгору» до спрайта **рибки**. ![У спрайті рибки додано блок «When I hear up» (Коли я чую вгору), а потім блок «повернути в напрямку 0».](images/starter-code.png) ---- /task --- +\--- /task --- ---- task --- +\--- task --- Подивися на код, який потрібен для переміщення рибки вгору, а потім спробуй розібратися з кодом для переміщення вниз, вліво та вправо. ---- collapse --- ---- -title: Як це зробити ---- +## --- collapse --- + +## title: Як це зробити ![Додано ще три пари блоків: «When I hear left» (Коли я чую ліворуч) та «повернути в напрямку -90»; «When I hear right» (Коли я чую праворуч) та «повернути в напрямку 90»; «When I hear down» (Коли я чую вниз) та «повернути в напрямку 180».](images/finished-code.png) ---- /collapse --- +\--- /collapse --- ---- /task --- +\--- /task --- ---- task --- +\--- task --- Натисни на **зелений прапорець** та промов «вгору», «вниз», «ліворуч» чи «праворуч». Перевір, чи рухається рибка у правильному напрямку. ---- /task --- +\--- /task --- diff --git a/uk-UA/step_7.md b/uk-UA/step_7.md index cd14a3f..e8b5c25 100644 --- a/uk-UA/step_7.md +++ b/uk-UA/step_7.md @@ -1,40 +1,39 @@ ## Додаткове завдання ---- challenge --- +\--- challenge --- ---- task --- +\--- task --- Додай змінну для відстеження рахунку і здобувай бал щоразу, коли рибка з'їдає корм. ---- collapse --- ---- -title: Як це зробити ---- +## --- collapse --- + +## title: Як це зробити Додай виділений червоним код до спрайта **Їжа**. ![Код Scratch: Надати рахунок значення 0, показати, повторити до значення y < -170, змінити y на -3, якщо торкаєтеся рибки, то змінити рахунок на 1, сховати.](images/score-hint.png) ---- /collapse --- +\--- /collapse --- ---- /task --- +\--- /task --- ---- task --- +\--- task --- Додай новий неїстівний спрайт, і віднімай очки, якщо рибка його їсть. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Задай різну швидкість падіння їжі. ---- /task --- +\--- /task --- ---- task --- +\--- task --- Або якщо хочеш, створи зовсім іншу гру, яка використовує голосові команди для керування персонажем! ---- /task --- +\--- /task --- ---- /challenge --- +\--- /challenge --- diff --git a/uk-UA/step_8.md b/uk-UA/step_8.md index c18b3f1..e50e922 100644 --- a/uk-UA/step_8.md +++ b/uk-UA/step_8.md @@ -1,8 +1,3 @@ ## Що робити далі? -У напрямі [«Машинне навчання і Scratch»](https://projects.raspberrypi.org/uk-UA/pathways/scratch-machine-learning) є багато інших проєктів про машинне навчання та ШІ. - -*** -Цей проєкт переклали волонтери. - -Завдяки волонтерам ми надаємо можливість людям у всьому світі навчатися рідною мовою. Ви також можете допомогти нам у цьому — більше інформації про волонтерську програму на [rpf.io/translate](https://rpf.io/translate). +У напрямі [«Машинне навчання і Scratch»](https://projects.raspberrypi.org/en/pathways/scratch-machine-learning) є багато інших проєктів про машинне навчання та ШІ.