Different approach to AI wearables: running an agent natively on Galaxy Watch #5453
ThinkOffApp started this conversation in General · Replies: 0 comments
Interesting to see omi's approach with custom hardware for an AI wearable. We took a different path with ClawWatch (https://github.com/ThinkOffApp/ClawWatch): running an intelligent agent directly on off-the-shelf Galaxy Watch hardware.
The stack: NullClaw (2.8 MB static ARM binary written in Zig) handles the agent runtime on the watch itself. Vosk provides offline speech recognition so voice input works without streaming audio to a server. The watch's built-in TTS handles output. LLM inference goes through an OpenClaw gateway to whatever model backend you connect (Claude, GPT, Gemini, etc.).
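For anyone curious what that loop looks like, here's a minimal sketch of one voice turn (capture → offline STT → gateway → TTS). The function names and stubbed bodies are illustrative assumptions, not ClawWatch's real API; the NullClaw runtime itself is Zig, this is just the shape of the pipeline.

```python
# Hypothetical sketch of one ClawWatch voice turn; names are illustrative.

def recognize_offline(audio_pcm: bytes) -> str:
    """Stand-in for Vosk offline recognition on the watch.

    A real implementation would feed PCM frames to a vosk.KaldiRecognizer
    and read the final result; stubbed here so the sketch runs anywhere.
    """
    return "what's my heart rate"

def query_gateway(prompt: str) -> str:
    """Stand-in for the OpenClaw gateway hop to the model backend.

    A real implementation would POST the prompt over HTTPS to the gateway,
    which forwards it to whichever model backend is configured.
    """
    return f"(model reply to: {prompt})"

def speak(text: str) -> str:
    """Stand-in for the watch's built-in TTS output."""
    return text

def voice_turn(audio_pcm: bytes) -> str:
    """One capture -> STT -> gateway -> TTS round trip."""
    prompt = recognize_offline(audio_pcm)
    reply = query_gateway(prompt)
    return speak(reply)

print(voice_turn(b"\x00" * 320))
```

The point of the structure: only `query_gateway` touches the network, so speech recognition keeps working offline and the model backend can be swapped without changing anything on the watch.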
The tradeoff vs. omi's approach: we don't need custom hardware, but we're limited to what the watch can do natively. That's still a lot, though. The watch's built-in sensors let the agent stay in touch with the user's biological state (heart rate, activity, and so on), so by being physically on the user it can provide far more personalised service than an agent running in the cloud.
Would be curious how omi handles the latency between audio capture and LLM response. On the watch we see about 2-3 seconds end-to-end with the gateway hop.
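To make the 2-3 s figure concrete, here's a rough per-stage budget for one voice turn on our setup. The individual stage numbers are illustrative assumptions, not published measurements; only the end-to-end range comes from what we observe.

```python
# Rough latency budget for one voice turn; per-stage numbers are
# illustrative assumptions, only the 2-3 s total reflects observation.
stages_ms = {
    "vosk_stt_finalize": 300,  # on-watch speech recognition finishes
    "gateway_hop": 400,        # watch -> OpenClaw gateway round trip
    "llm_inference": 1500,     # model backend generates the reply
    "tts_start": 200,          # watch TTS begins speaking
}
total_ms = sum(stages_ms.values())
print(f"end-to-end: {total_ms / 1000:.1f} s")  # -> end-to-end: 2.4 s
```

Most of the budget is model inference, which is why the gateway hop itself is a smaller lever than it first looks.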