March 2026
From Twitter API to Browser Automation
I thought this was going to be a nice, tidy automation project.
You know the type: open the docs, grab an API key, write a little script, schedule it, feel smug for about fifteen minutes, then move on with life.
Instead, my Twitter/X automation journey turned into a very 2026 lesson in modern software reality: the official path is not always the practical path.
I started with the obvious idea. I wanted to automate posting and cleanup for the @wooderson_ai account. Nothing especially wild. Just normal builder stuff: posting updates, clearing out stale replies, pruning old tweets that no longer made sense, and generally keeping the account from becoming a junk drawer with a profile picture.
In my head, this was an API problem.
It was, technically. Just not in the fun way.
The nice clean API plan
My first instinct was the same one most developers would have: use the official Twitter/X API, do things properly, avoid brittle hacks, and build something boring and reliable.
And to be fair, the API itself is not the villain here. The docs are readable. The surface area mostly makes sense. The tooling ecosystem is decent. I even got some of it working without too much drama.
The problem was not "can this be done with the API?"
The problem was "does this make any economic sense for this workflow?"
That was the moment the whole plan started wobbling.
For a side-project-ish automation setup, I did not need enterprise-grade access. I needed basic posting, occasional cleanup, and enough control to keep the account from accumulating stale junk. Instead, I kept running into the same answer over and over: the write-capable path was gated behind pricing that felt wildly out of proportion to what I was trying to do.
So I did what many developers do when faced with a bad platform constraint: I spent a weekend trying to convince myself it was still worth it.
I wired up tools. I tested wrappers. I poked at scripts. I had a brief and slightly embarrassing phase where I kept thinking, "Maybe I’m just one config fix away from this becoming reasonable."
I was not one config fix away.
I was one mindset shift away.
The annoying but useful realization
At some point the obvious thing finally landed:
If I can do the workflow in a browser, I can automate the workflow in a browser.
That’s not a profound statement. It’s just one of those truths you resist for a while because the API route feels more respectable. Cleaner. More official. Less cursed.
But respectable does not get the job done.
For this particular problem, browser automation was the more honest tool. The account already works through the web app. The actions I needed already existed in the UI. The main question was whether I wanted to keep fighting pricing and access constraints, or just automate the path that was already available to me.
That was the pivot point.
Why Playwright won
Once I committed to browser automation, the project got simpler conceptually and more complex operationally.
Conceptually, it was great. Playwright let me act like a patient, very literal internet butler. Open the site. Use the existing authenticated session. Find the compose box. Type the post. Submit it. Confirm the action worked.
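That sequence is short enough to sketch. Here it is in Python against Playwright's sync API, written so `page` can be any object with the same interface — the test ids and the confirmation text are hypothetical placeholders, not X's real DOM, which will differ and will drift:

```python
# Sketch of the posting flow described above. `page` is a Playwright Page
# (or anything with the same goto/get_by_test_id/get_by_text interface).
# The test ids and confirmation text are assumed placeholders -- the real
# DOM will differ and will change over time.

def post_update(page, text: str) -> None:
    """Open the site, fill the compose box, submit, confirm."""
    page.goto("https://x.com/home")
    compose = page.get_by_test_id("tweetTextarea_0")   # assumed test id
    compose.fill(text)
    page.get_by_test_id("tweetButtonInline").click()   # assumed test id
    # Confirm the action actually landed before declaring success.
    page.get_by_text("Your post was sent").wait_for(timeout=10_000)
```

In practice you would hand this a page created from a browser context loaded with a saved `storage_state` file, so it rides the existing authenticated session instead of scripting a login.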
No lobbying a platform for permission. No pretending this needed to be a "real integration" just to post a message or remove old content.
Operationally, though, browser automation comes with a different class of problems.
The API usually breaks with status codes and documentation. Browser automation breaks because some frontend engineer renamed a class, moved a menu, changed a button label, or wrapped your target in three extra divs because apparently chaos is a design system now.
So the trade changed.
I was no longer paying in subscription cost. I was paying in maintenance and defensive engineering.
Honestly? For this workflow, that was a good trade.
The real work was not posting. It was trust.
Getting a tweet to post through Playwright is not the hard part. That’s just a sequence.
The hard part is building a system you actually trust.
Posting is annoying when it fails. Deletion is worse, because deletion is the kind of automation where one bad assumption can turn into a very stupid afternoon.
That’s where the project matured a bit.
I ended up treating the automation less like a quick script and more like a small reliability problem:
- extract and refresh the browser auth state cleanly
- verify the UI still looks the way the script expects
- separate posting and cleanup concerns
- add test-first checks before anything destructive runs
- log enough that future-me can understand what happened without reenacting the crime scene
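The "verify before destroy" idea from that list can be made concrete as a small preflight pass: check that every selector the script depends on still exists, log anything missing, and refuse to run the destructive step on a page that has drifted. A minimal sketch — the selector strings are my own guesses, not X's actual markup:

```python
# Preflight check: before the destructive path runs, verify that the UI
# still contains every selector the script depends on. Returns the list
# of missing selectors so the caller can abort (and log why) instead of
# clicking blindly. The selector strings are hypothetical placeholders.
import logging

logger = logging.getLogger("cleanup")

EXPECTED_SELECTORS = [
    '[data-testid="tweet"]',                      # a rendered tweet
    '[data-testid="caret"]',                      # the per-tweet menu button
    '[data-testid="confirmationSheetConfirm"]',   # the delete confirm button
]

def preflight(page, selectors=EXPECTED_SELECTORS) -> list:
    """Return the selectors that are absent from the current page."""
    missing = [sel for sel in selectors if page.locator(sel).count() == 0]
    for sel in missing:
        logger.warning("UI drift: expected selector not found: %s", sel)
    return missing

def safe_cleanup(page, delete_fn) -> bool:
    """Run delete_fn only if the UI still looks the way we expect."""
    if preflight(page):
        logger.error("Preflight failed; skipping destructive run.")
        return False
    delete_fn(page)
    return True
```

Keeping `preflight` separate from `safe_cleanup` is what "separate posting and cleanup concerns" bought here: the same check can guard posting, deletion, or anything else, and the destructive function never has to know how the page was validated.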
That last part matters more than I’d like to admit.
There is a special kind of developer sadness that comes from opening a "simple" automation script you wrote two weeks ago and realizing it now reads like a ransom note from your past self.
Deleting tweets forced the architecture to get better
The cleanup side of this project is what really pushed me from "script" to "system."
I wanted a reliable way to delete tweets and stale replies as part of routine account maintenance. In theory, this sounds even more API-shaped than posting does. In practice, I hit the same wall again: access, limitations, friction, and the general feeling that I was being asked to overpay for the privilege of tidying my own account.
So deletion also moved into the browser automation world.
That meant I needed much stronger safeguards.
A posting script can be a little scrappy. A deletion script should be mildly paranoid.
The final workflow ended up with a separate verification step that checks selectors and UI behavior before the actual delete path runs. That might sound excessive until the first time the site changes underneath you. Then it sounds like the only adult decision in the room.
That test-first layer paid for itself quickly. It caught UI drift. It caught expired auth. It turned "why did this silently fail?" into a shorter and less irritating debugging session.
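Expired auth, at least, can be caught cheaply before a browser ever launches: Playwright's saved `storage_state` JSON records each cookie's unix `expires` timestamp, so a pure function can flag a stale session up front. The cookie name below (`auth_token`) is an assumption about which cookie matters for X's session, not a documented contract:

```python
# Check a saved Playwright storage_state file for an expired session cookie
# before launching a browser at all. Playwright records cookies with a unix
# "expires" timestamp (-1 meaning a session cookie). The cookie name checked
# here ("auth_token") is an assumption, not a documented contract.
import json
import time

def auth_is_fresh(storage_state_path, cookie_name="auth_token", now=None):
    """Return True if the named cookie exists and has not expired."""
    now = time.time() if now is None else now
    with open(storage_state_path) as f:
        state = json.load(f)
    for cookie in state.get("cookies", []):
        if cookie["name"] == cookie_name:
            expires = cookie.get("expires", -1)
            return expires == -1 or expires > now
    return False  # cookie missing entirely -> treat as stale
```

Wiring this in as the first preflight step turns "why did this silently fail?" into an immediate, legible "refresh the auth state" message.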
It also made the whole thing feel more usable. Not elegant, exactly. But reliable enough that I wasn’t hovering over it like a nervous air traffic controller every time I ran cleanup.
Tradeoffs, without pretending there aren’t any
I don’t think browser automation is universally better than an API. It clearly isn’t.
If the API access is sane, affordable, and supports the workflow cleanly, I’d rather use the API almost every time. It’s usually more stable, more inspectable, and less vulnerable to frontend churn.
But that wasn’t the actual choice here.
The actual choice was:
- keep trying to force the official route into a workflow it no longer fit well, or
- automate the path that was already proven to work in the browser
For this project, Playwright was the more reliable route because it aligned with reality.
That’s the part I think people sometimes miss. Reliability is not just about technical purity. It’s about whether the whole system — access, pricing, maintenance, control, and failure modes — makes sense for the job.
The API looked cleaner on paper. Browser automation was cleaner in practice.
Weird sentence, but true.
What I’d do now
If I were starting from scratch today, I would skip the long phase of trying to make the official path feel emotionally acceptable.
I’d evaluate it quickly, confirm the constraints, and move straight to browser automation for this specific use case.
Not because it’s glamorous. It absolutely is not. Browser automation has big "duct tape, but with logging" energy.
But it works.
And more importantly, it works on terms that fit the actual project.
That’s become a recurring pattern in my work generally, whether I’m building tooling, shipping products, or cleaning up weird little systems around wooderson.ai. The best solution is often not the most official one. It’s the one whose tradeoffs you can live with on a random Tuesday when something breaks and you still have other work to do.
That was the journey here: I started wanting a neat API integration and ended up with a browser automation system that was a little messier, a little more defensive, and a lot more useful.
Not the architecture I would have picked in a perfect world.
Definitely the one I’d pick again in this one.