Willow as a websocket server and integration with Home Assistant. #358
Replies: 1 comment 1 reply
-
I am curious whether you continued working on this? After reading your post, it makes a lot of sense to me to use the ESP box directly with HA, without requiring a middleman (WAS). Have you compared what you are using against WAS? That is, is the accuracy higher with WAS? I think that was the point of developing it: to use a GPU running an LLM to process language more powerfully than HA running on, say, a Raspberry Pi. If your method is just as accurate (and fast), though, it would call into question why WAS is needed at all. Ultimately, that is my question: do we need WAS if your method works just as accurately? If not, I would love to see an HA integration from you offering direct voice assistance via the ESP box devices. That would certainly help many more people get into the "voice assistant" realm with just a simple purchase, without needing all the additional setup (WAS, etc.).
-
To be clear: I know about WAS/WIS and the other projects, and they're really cool, but I don't need them :) What I'm looking at here is a direct connection from HA to Willow.
I have a draft integration with HA via websocket server running on Willow.
I made it half a year ago, when I first heard about Willow, but I only came back to it now and updated it to the current version of the Willow source code.
Voice data is sent over the websocket in both directions. This avoids the connection-setup delays I was getting with the other existing options.
HA connects to Willow, so no access tokens are needed for HA to work with it. This is very important to me because my HA instance is published on the internet and can be accessed with a token, and HA doesn't offer any per-token scoping of access rights. That is one of the reasons I try to avoid integrations of this kind. Also, no additional ports need to be opened on the HA side, since HA is the initiator of the connection.
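To illustrate the connection direction described above (not the actual Willow code or protocol; all names, the port, and the handshake here are illustrative assumptions), here is a minimal stdlib-only asyncio sketch where the device acts as the listening server and the HA side dials out, so nothing inbound is exposed on HA:

```python
# Sketch of the "HA connects to Willow" direction. Hypothetical names
# and port; the real integration speaks websocket, not this toy framing.
import asyncio

WILLOW_HOST, WILLOW_PORT = "127.0.0.1", 18733  # assumed device endpoint

async def willow_device(reader, writer):
    # Device side: accept the connection HA opened and answer a hello.
    hello = await reader.readline()
    writer.write(b"willow: " + hello)
    await writer.drain()
    writer.close()

async def ha_side():
    # HA side: initiate the connection, so no inbound port or access
    # token has to be exposed on the HA instance itself.
    reader, writer = await asyncio.open_connection(WILLOW_HOST, WILLOW_PORT)
    writer.write(b"hello from HA\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    return reply

async def main():
    server = await asyncio.start_server(willow_device, WILLOW_HOST, WILLOW_PORT)
    async with server:
        return await ha_side()

print(asyncio.run(main()))  # b'willow: hello from HA\n'
```

The point of the sketch is only the role reversal: the ESP box is the server, HA is the client.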
A custom integration is written for HA, which implements an Assist pipeline and a media player.
Responses from the Assist pipeline are played back via the websocket, while the media player starts playback from an HTTP link, which lets HA play anything through the device.
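One way such a single websocket can carry both the Assist pipeline traffic and media-player commands is to use JSON text frames for control events and raw binary frames for audio. This is a hypothetical framing sketch, not the actual wire format of the integration; the event names and fields are assumptions:

```python
import json

def encode_event(event_type: str, data: dict) -> str:
    """Build a JSON control frame, e.g. a pipeline event or a
    media-player command (names are illustrative)."""
    return json.dumps({"type": event_type, "data": data})

def decode_frame(frame):
    """Dispatch on frame type: text frames are control events,
    binary frames are raw audio chunks."""
    if isinstance(frame, bytes):
        return ("audio", frame)
    msg = json.loads(frame)
    return (msg["type"], msg["data"])

# Example: HA hands the device an HTTP URL to play, as the media
# player described above would (URL is a placeholder).
play = encode_event("media.play", {"url": "http://ha.local:8123/media/abc.wav"})
print(decode_frame(play))  # ('media.play', {'url': 'http://ha.local:8123/media/abc.wav'})
print(decode_frame(b"\x00\x01")[0])  # 'audio'
```

Keeping audio in binary frames avoids base64 overhead, which matters on an ESP-class device streaming microphone data both ways.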
And here I turn to the Willow authors: would you be interested in having this in the mainline? First of all, I mean the concept: Willow being able to work without other services (except WAS, though personally I only see a need for it during initial configuration, but that's a detail), and, most importantly for me, Willow acting in the role of a server.
I'm at a fork in the road right now: if the authors don't need it, it will most likely go on the shelf and be used locally at my place. Maybe I'll start a fork someday (laughs).
Otherwise, if you're willing to accept the functionality into the mainline, I'll get the Willow code and the HA integration ready to publish.
Here's a video of a usage example; you can try to estimate the command and response latency from it.