Hello there!

This is a site for stuff I write that doesn’t fit in 280 characters or simple images.

Bambu Lab Store Filament Tracker

I’ve been running a Bambu Lab Store filament tracker at bbltracker.com.

bbltracker dashboard

I recently got into 3D printing. A lot of what I print is practical: last-millimeter adapters, things like robot vacuum ramps around the house, and other tiny fixes that make daily life less annoying.

Bambu Lab released the P2S, the flagship of their midrange line, and I picked one up after hearing how polished their ecosystem is. People call them the Apple of 3D printers, and I can see why. The hardware is solid, but the software and ecosystem are really what make it shine.

There is also a bit of a format war around filament. Bambu sells filament with RFID tags that are not easy for third parties to mimic. The RFID flow makes loading filament easy and automatically applies Bambu-tuned profiles. Some people feel those profiles run too fast, but for my functional prints, they have worked great.

Where the frustration started

I like having color options, and color can be functional too. But stock has been rough for a while, even after the holiday season.

That rough stock situation (and the need for a tracker) is really a symptom of how popular Bambu has gotten, especially over the holidays:

Handy chart showing Bambu popularity spike over the holidays

Source: Reddit comment

Then there is the constant bulk-sale mechanic on the Bambu Lab Store: buy 4+, get a strong discount, buy more, save more. In practice, that falls apart when the combinations you want are never all in stock at the same time.

At the time, the store UI also made this harder than it needed to be. Out-of-stock options were not clearly crossed out, so building a cart felt like guesswork.

That made me angry, but in a constructive way. So I built a tracker.

What the tracker does

At a high level, the tracker monitors store inventory over time, keeps historical snapshots, and renders a dashboard so I can quickly answer:

  • What is actually in stock right now?
  • What appears stable versus constantly draining?
  • What was the last major restock spike?
  • Are there ETA hints for restocks?
  • Is this likely to go out of stock soon?
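
The “stable versus draining” call can be sketched as a simple comparison across stored snapshots. This is only an illustration of the idea; the names and thresholds below are mine, not the tracker’s actual schema.

```typescript
// Sketch of the kind of snapshot history the tracker keeps
// (illustrative field names, not the real schema).
interface Snapshot {
  takenAt: number;   // Unix epoch ms
  quantity: number;  // reported stock count at that moment
}

type Trend = "stable" | "draining" | "restocked";

// Classify a SKU by comparing the newest snapshot in the window to the
// oldest: a large relative drop reads as "draining", a jump as "restocked".
function classify(history: Snapshot[], threshold = 0.1): Trend {
  if (history.length < 2) return "stable";
  const first = history[0].quantity;
  const last = history[history.length - 1].quantity;
  if (first === 0) return last > 0 ? "restocked" : "stable";
  const change = (last - first) / first;
  if (change <= -threshold) return "draining";
  if (change >= threshold) return "restocked";
  return "stable";
}
```

With two snapshots showing 100 then 50 units, `classify` reports `"draining"`; 100 then 98 reads as `"stable"`.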

I started with only the US store, then expanded to EU, CA, UK, AU, and Asia coverage.

I also reused a stock-quantity verification trick I had learned while documenting openpilot’s Toyota security key journey.

Building it fast (and keeping it moving)

I used a lot of Google Antigravity while building this. Web scraping is tedious work, and large language models helped a lot with iteration speed and the constant tweaks.

The codebase is not pretty in every corner, but LLMs have made it very workable to maintain and improve continuously.

Later, I added depletion-rate estimation so the tracker can make a rough guess about when something might go out of stock.
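
A linear back-of-the-envelope version of that guess, assuming the drain rate between the first and last snapshot holds; the real estimator may well be smarter than this.

```typescript
// One inventory observation: t in epoch ms, qty = reported stock count.
interface Point { t: number; qty: number }

// Estimate a stock-out timestamp (ms) by extrapolating the average drain
// rate over the window. Returns null if stock is flat or rising.
function estimateStockout(points: Point[]): number | null {
  if (points.length < 2) return null;
  const first = points[0];
  const last = points[points.length - 1];
  const dq = last.qty - first.qty;
  const dt = last.t - first.t;
  if (dq >= 0 || dt <= 0) return null; // not draining
  const ratePerMs = -dq / dt;          // units lost per ms
  return last.t + last.qty / ratePerMs;
}
```

Dropping from 100 to 50 units over one interval extrapolates to hitting zero one interval after the last snapshot.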

Things that happened after launch

After I released it and posted it on /r/BambuLab, a bunch of interesting things happened:

  • Bambu nerfed the original stock-count hole the tracker used.
  • For a while, counts were capped at 200 in the US view, which reduced visibility.
  • Later, that cap moved to 400 in the US store. Maybe my tracker’s utility was actually a factor in that decision? I do see traffic from China, though as far as I can tell there is no Bambu Lab Store running the same codebase there.
  • Bambu improved their UI and added clearer out-of-stock cross-outs.
  • I added more quality-of-life indicators, including “stable stock,” ETA context, and restock-spike visibility.
  • I added a GoFundMe and a nice patron funded a custom domain and hosting for the tracker.

Backend at a glance

Keeping this online reliably has mostly been a deployment and operations problem.

If you print a lot and buy filament in batches, this has been much better than manually refreshing product pages and hoping for the best.

8:10 pm / bambu , filament , duckdb , typescript , cloud run , cloudflare pages

Surviving a power outage with fiber ONT and PoE setup (Frontier FOX222)

fox222

power supply

For some reason, it’s been a bit hard to find information on how to set up my particular fiber ONT so that it can survive a power outage with a PoE setup.

While one could of course power a whole house with solar and batteries, I wanted a way to keep my fiber ONT and router powered during a power outage without investing in a full home battery system.

Costco recently had a small EcoFlow RIVER 3 Plus Portable Power Station on sale, which I installed into my home network setup next to my router.

It’s been wonderful as a UPS!

My setup is a bit peculiar: my Frontier FOX222 ONT sits a fair distance from the power station, which lives next to the router, with an Ethernet cable running between the ONT and the router.

The only way to power the ONT without also plumbing AC from the power station: a PoE injector on the Ethernet cable between the ONT and the router, and a splitter at the ONT end that converts the PoE back to the 16V DC the ONT expects.
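
As a sanity check on runtime, here’s the back-of-the-envelope math. The 286Wh capacity is EcoFlow’s published figure for the RIVER 3 Plus as I recall it, and the wattages and efficiency below are rough assumptions on my part, not measurements.

```typescript
// Estimate hours of runtime from battery capacity and load.
// efficiency accounts for inverter/conversion losses (assumed, not measured).
function runtimeHours(capacityWh: number, loadW: number, efficiency = 0.85): number {
  return (capacityWh * efficiency) / loadW;
}

// Assumed loads: ONT ~8 W (via the PoE splitter) + router ~10 W.
const hours = runtimeHours(286, 18); // roughly 13-14 hours
```

That comfortably covers the typical outage here, which is why a small power station works fine in place of a whole-home battery.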

[... 764 words]

6:12 pm / fiber , ont , poe , setup , power outage , frontier , ecoflow , ups

Talky Pet Watcher, a cat watching AI

Cat Watch

Github Project: talky_pet_watcher

I’m allergic to cats 😢. But a friend asked me to watch their cat. My family was more than willing to help, but to be safe, I also wanted to monitor the cat to make sure she didn’t get into any shenanigans that were too much.

With some cheap TP-Link Tapo cameras, I set up a system to watch the cat remotely and ensure she was safe while I could not be next to her. It’s a Telegram bot that also sends me a message whenever the cat is doing something interesting.

It watches multiple cameras and tries to aggregate the data to provide insights on the cat’s behavior.

Here’s the project description:

Talky Pet Watcher is a tool to watch a series of ONVIF webcams and their streams. It captures video clips, uploads them to Google AI for processing, and uses a generative model to create captions and identify relevant clips. The results are then delivered to a Telegram channel. This project is designed to help pet owners keep an eye on their pets and share interesting moments with friends.

This was slapped together in a few days, as I only watched the cat for two weeks.

I was also interested in Google Gemini and how cheap Google claimed it was for analyzing video. I figured this would be a good way to test it out.

What went well:

  • The system was able to capture interesting moments effectively.
  • It was able to concatenate multiple cameras’ POVs to provide interesting views.
  • The cameras were cheap!
  • Putting the results on Telegram was a good way to share the results with the cat’s owner and friends.

What did not go well (and there are many!):

  • I did not implement history, so the system basically reported the same thing over and over again.
  • Getting reliable clips from multiple cameras when motion was detected was unreliable, thanks to the TP-Link Tapo cameras’ flaky web servers, which would stall and halt a lot.
  • Google Gemini could only analyze video clips at 1 FPS. For an agile kitten, that’s coarse enough that it sometimes made wrong assumptions about what the cat was doing.
  • Implementation-wise, bun was very crashy and I had to restart it a lot.
  • The connection to the camera also had to be restarted a lot.

If I had to do it again:

  • Use something that can pull from an NVR by relative timestamps. There was a Rust NVR, but it was too complicated for me to set up in the small amount of time I had.
  • Implement a history system and maybe some memory bank system.
  • And so so much more! 😅
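
For the history piece, even something as small as suppressing near-duplicate captions would have fixed the “same report over and over” problem. A sketch of what I have in mind; the shared-word similarity measure here is an illustrative stand-in, not what I actually built.

```typescript
// Remember captions already sent and suppress near-duplicates.
class CaptionMemory {
  private seen: Set<string>[] = [];

  private static words(caption: string): Set<string> {
    return new Set(caption.toLowerCase().split(/\W+/).filter(Boolean));
  }

  // Returns true if the caption is different enough from everything
  // already sent (Jaccard similarity on word sets below the threshold).
  shouldSend(caption: string, threshold = 0.7): boolean {
    const w = CaptionMemory.words(caption);
    for (const prev of this.seen) {
      const inter = [...w].filter((x) => prev.has(x)).length;
      const union = new Set([...w, ...prev]).size;
      if (union > 0 && inter / union >= threshold) return false;
    }
    this.seen.push(w);
    return true;
  }
}
```

The first “the cat is sleeping on the couch” goes out; an identical follow-up gets suppressed, while a genuinely new caption still passes.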

6:08 am / catte , google_gemini , vision , telegram

comma three faux touch keyboard

🦾 comma three Faux-Touch keyboard

Long arms for those of us with short arms from birth or those who can’t afford arm extension surgery!

touchkey keyboard demo

Built a faux-touch keyboard for the comma three using a CH552G microcontroller. The keyboard is off the shelf and reprogrammed to emulate a touchscreen.

It’s a bit of a hack but it certainly beats gorilla arm syndrome while driving.

For $5 of hardware, it’s very hard to beat.

The journey started in January 2024 and ended in May 2024. Through it, I had to struggle with:

  • CH552G programming
  • Linux USB HID Touchscreen protocol
  • Instructions and documentation
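
The heart of the faux-touch trick is that USB HID absolute pointers report coordinates in a fixed logical range (0 to 32767 is the common convention for digitizers), so a “key press” just maps a fixed screen position into that range. A sketch in TypeScript; the actual firmware is C on the CH552G, and the example resolution is an assumption for illustration.

```typescript
// Common logical maximum for HID digitizer X/Y axes.
const LOGICAL_MAX = 32767;

// Map a pixel coordinate into the HID logical coordinate range.
function toLogical(px: number, spanPx: number): number {
  return Math.round((px / (spanPx - 1)) * LOGICAL_MAX);
}

// Hypothetical key binding: pressing a key "taps" a fixed on-screen spot.
// The 2160x1080 resolution here is assumed, not taken from the device spec.
function tapReport(xPx: number, yPx: number): { x: number; y: number } {
  return { x: toLogical(xPx, 2160), y: toLogical(yPx, 1080) };
}
```

The firmware then packs those logical coordinates into the touchscreen’s HID input report, which is where the Linux USB HID touchscreen protocol struggle came in.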

It’s also seen its share of unintended usage. The FrogPilot project has had users who were surprisingly interested in using it as an alternative for some GM vehicles that have no ACC button. They use the keyboard to touch the screen where ACC would be.

Part of the work also meant upstreaming to the CH552G community. While I doubt it’ll have any users, it’s nice to give back to the community. There is now a touchscreen library for the CH552G.

The project is open source and can be found at:

https://github.com/nelsonjchen/c3-faux-touch-keyboard

There’s a nice readme that explains how to build and flash the firmware.

10:02 pm / openpilot , replay , ch552g , comma_ai , hardware

Replicate.com openpilot Replay Clipper

Web capture_8-11-2023_9551_replicate com

The replay clipper has been ported to Replicate.com!

https://replicate.com/nelsonjchen/op-replay-clipper

Along with it comes a slew of upgrades and updates:

  • GPU accelerated decoding, rendering, and encoding. NVIDIA GPUs are provided to the Replicate environment and greatly speed up the clipper.
  • Rapid downloading of clips. Instead of relying on replay to handle downloads sequentially, we use the parfive library to download the underlying data in parallel.
  • Comma Connect URL input. No need to mentally calculate the start time and length. Just copy and paste the URL from Comma Connect.
  • Video/UI-less mode. Don’t want the UI? You can skip it.
  • 360 mode. Render 360 clips.
  • Forward Upon Wide. Render clips with the forward clip upon the wide clip.
  • Richer error messages to help pinpoint issues.
  • No more having to manage GitHub Codespaces. Replicate handles all the setup and cleanup for you.
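
The parallel-download point deserves a sketch. parfive is a Python library, but the pattern itself, bounded-concurrency downloads instead of strictly sequential ones, looks like this in TypeScript. This is illustrative, not the clipper’s actual code.

```typescript
// Run up to `limit` async tasks at once, preserving result order.
async function parallelMap<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Each worker grabs the next unclaimed index; JS is single-threaded,
    // so the read-and-increment below is race-free between awaits.
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Feeding it a list of segment URLs with `fetch` as the task is the whole idea: a handful of downloads in flight at all times instead of one.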

Unfortunately, there is a cost. It’s a very small cost, but technically Replicate.com is not free. Expect to drop about a cent per render. Thankfully, you have a lot of trial credits to burn through, and the clipper can run on a free-ish tier.

10:28 pm / openpilot , replay , replicate , openpilot_e2e_long , comma_ai

Colorized-Project: A Project-Color restoration.

screenshot_c235a4a9-429b-4564-bfef-24e4f12bd9a8

GitHub Link: https://github.com/nelsonjchen/Colorized-Project

On IntelliJ IDEs, I’ve been using Project-Color to set a color for each project. This is useful for me because I have a lot of projects open at once, and it’s nice to be able to quickly identify which project is which.

Unfortunately, Project-Color hasn’t been updated in a while, and it doesn’t work with the latest versions of IntelliJ.

After a few months of using IntelliJ without Project-Color, I decided to try to fix it. I’ve released the result as Colorized-Project.

There are a few rough edges, but it’s mostly functional. I’ve been using it for a few weeks now, and it’s been working well. I’ve also submitted it to the JetBrains plugin repository.

It still needs some polish like a nicer icon, and I’d like to add a few more features.

10:28 pm / intellij , kotlin

GitHub Wiki SEE: Search Engine Enablement: Year One

It’s been nearly a year since I supercharged GitHub Wiki SEE with dynamically generated sitemaps.

Since then a few things have changed:

  • GitHub has started to permit a select set of wikis to be indexed.
  • They have not budged on letting wikis that are not publicly editable be indexed.
  • There is now a star requirement of about 500, which appears to be steadily decreasing.
  • Many projects have reacted by moving or configuring their wikis to indexable platforms.

As a result, traffic to GitHub Wiki SEE has dropped off dramatically.

stats

This is a good thing, as it means that GitHub is moving in the right direction.

I’m still going to keep the service up, as it’s still useful for wikis that are not yet indexed, and there are still about 400,000 of those.

Hopefully GitHub will continue to move in the right direction and allow all wikis to be indexed.

9:04 pm / github , wiki , seo , flyio

openpilot Replay Clipper

End to End Longitudinal Control is currently an “extremely alpha” feature in openpilot that is the gateway to future functionality such as stopping for traffic lights and stop signs.

Problem is, it’s hard to describe its current deficiencies without a video.

So I made a tool to make it easier to share clips of this functionality, with a view into what openpilot is seeing and thinking.

GitHub repository

It’s a bit heavy on resource use, though. I was thinking of making it into a web service, but I simply do not have enough time. So I made instructions for others to run it on services like DigitalOcean, where it is cheap.

It is composed of a shell script and a Docker setup that fires up a bunch of processes and then kills them all when done.

Hopefully this leads to more interesting clips being shared, and more feedback on the models that comma.ai can use.

I later also ported this to Replicate here.

7:42 pm / openpilot , replay , docker , docker-compose , digitalocean , openpilot_e2e_long , comma_ai

Datasette-lite sqlite-httpvfs experiment POC with the California Unclaimed Property database

screenshot

There’s a web browser version of Datasette called datasette-lite, which runs on Python ported to WASM with Pyodide and can load SQLite databases. Relatively recently, as a curious test, I grafted the enhanced lazyFile implementation from emscripten (and then from this implementation) onto datasette-lite. I threw an 18GB CSV from CA’s unclaimed property records here

https://www.sco.ca.gov/upd_download_property_records.html

into an FTS5 SQLite database, which came out to about 28GB after processing:

POC, non-merging Log/Draft PR for the hack:

https://github.com/simonw/datasette-lite/pull/49

You can run queries through datasette-lite if you URL-hack your way straight to the query dialog. Browsing is kind of a dud at the moment, since Datasette runs a count(*), which downloads everything.
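
The trick that makes a 28GB database browsable at all is lazy loading over HTTP Range requests: only the byte ranges SQLite actually asks for get fetched, chunk by chunk. A simplified sketch of that access pattern; the names and chunk size are illustrative, not the real lazyFile code.

```typescript
const CHUNK_SIZE = 64 * 1024; // fetch granularity (assumed for illustration)

// Byte range of the whole chunks covering [offset, offset + length).
function chunkRange(offset: number, length: number): { start: number; end: number } {
  const start = Math.floor(offset / CHUNK_SIZE) * CHUNK_SIZE;
  const end = Math.ceil((offset + length) / CHUNK_SIZE) * CHUNK_SIZE - 1;
  return { start, end };
}

// Fetch only the requested slice of a remote file via a Range request,
// then trim the chunk-aligned response down to the exact bytes asked for.
async function readRange(url: string, offset: number, length: number): Promise<Uint8Array> {
  const { start, end } = chunkRange(offset, length);
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  const buf = new Uint8Array(await res.arrayBuffer());
  return buf.subarray(offset - start, offset - start + length);
}
```

This is also why a full-table count(*) is so painful: it touches nearly every page, so the lazy loader ends up pulling the whole file anyway.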

Elon Musk’s CA Unclaimed Property

Still, not bad for a $0.42/mo hostable, cached, CDN’d read-only database. It’s on Cloudflare R2, so there are no bandwidth costs.

Celebrity gawking is one thing, but the real, useful thing this can do is search by address. If you aren’t sure of the names, such as when people have multiple names or nicknames, you can search by address and get a list of properties at a location. This is one thing the California Unclaimed Property site can’t do.

I am thinking of making this more proper when R2 introduces lifecycle rules to delete old dumps. I could automate the process of dumping with GitHub Actions but I would like R2 to handle cleanup.

8:36 pm / datasette , datasette-lite , sqlite-httpvfs , experiment , sqlite , cloudflare , r2

Released Gargantuan Takeout Rocket

Finally released GTR or Gargantuan Takeout Rocket.

GitHub repository


Gargantuan Takeout Rocket (GTR) is a toolkit of guides and software that helps you take your data out of Google Takeout and put it somewhere else safe: easily, periodically, and fast. It makes it easy to do the right thing and periodically back up your Google account and related services, such as your YouTube account or Google Photos.


It took a lot of time to research and to work around the issues I found in Azure.

I also needed to take apart browser extensions and implement my own browser extension to facilitate the transfers.

Cloudflare Workers was also used to work around issues with Azure’s API.

All this combined, I was able to take out 1.25TB of data in 3 minutes.

Now I’m showing it around on Twitter, Discord, Reddit, and elsewhere to users who have used VPSes to back up Google Takeout, or who have expressed dismay at the lack of options for storing archives on the cheap. The response from people who’ve opted to stay on Google has been good!

There is also a project page with additional details here.

9:04 pm / google , takeout , azure , cloudflare , chrome , extensions , dataportability , backup

Projects