A "fun" little homelab experiment that I had to debug: over the last week, I set up a private Piped instance for myself so that I can watch YouTube videos without input from the algorithm: https://github.com/TeamPiped/Piped
It was pretty easy to set up, and importing my subscriptions worked flawlessly. But a few hours later I discovered an issue: the import function scrapes YouTube for the latest videos from every subscribed channel all at once, while keeping those subscriptions up to date uses a different mechanism that only works if the Piped instance is an internet-accessible endpoint (like a public instance would be). Since my instance has no public endpoint (it's reachable only on my Tailscale network), Google couldn't push me subscription updates, so my feed was permanently frozen in time at the moment of that first successful import.
To fix this, I extracted the authentication token from Piped (the procedure is in their documentation) and wrote a small bash script (or about 20 small bash scripts, while I was debugging) that runs on a schedule via my Synology NAS's task scheduler. The script:
- Uses a docker command to forcibly wipe Piped's subscription database in the backend
- Opens a JSON file with my exported YouTube subscriptions
- Strips out the majority of the data, leaving only the 24-character channel IDs (this took forever to figure out)
- Hands those IDs to the Piped backend container to process
- Piped receives the subscriptions and kicks off the import process as if it were running for the first time

The result is that Piped fetches fresh subscription data through its import path rather than its true subscription-feed path, so it now works without my having to expose a public-facing endpoint. The only downside is that if I subscribe to a new channel, I need to make a new JSON export to ensure it's picked up during the next task execution. Using Piped, that takes about a minute, so it's not a huge deal.
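For anyone curious, the core of the script looks roughly like this. The container name, database table, and import endpoint below are placeholders I've filled in for illustration, not verified Piped internals, and the subscriptions file here is a dummy stand-in; only the ID-extraction step runs as-is:

```shell
#!/bin/bash
set -euo pipefail

# 1. Wipe the old subscriptions in the backend
#    (placeholder container/table names, commented out for the sketch):
# docker exec piped-postgres psql -U piped -d piped \
#   -c 'DELETE FROM users_subscribed;'

# 2. A dummy export standing in for the real YouTube subscriptions JSON:
cat > /tmp/subscriptions.json <<'EOF'
[
  {"url": "https://www.youtube.com/channel/UCaaaaaaaaaaaaaaaaaaaaaa"},
  {"url": "https://www.youtube.com/channel/UCbbbbbbbbbbbbbbbbbbbbbb"}
]
EOF

# 3. Strip everything except the 24-character channel IDs
#    ("UC" followed by 22 ID characters):
ids=$(grep -oE 'UC[0-9A-Za-z_-]{22}' /tmp/subscriptions.json | sort -u)
echo "$ids"

# 4. Hand the IDs to the backend to re-import
#    (placeholder endpoint and token, commented out for the sketch):
# printf '["%s"]' "$(echo "$ids" | paste -sd '","' -)" \
#   | curl -s -X POST http://localhost:8080/import \
#       -H "Authorization: $TOKEN" -d @-
```

In the actual scheduled task the wipe and the POST run for real; they're commented out here so the sketch stands on its own.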
But hey, it works!