Spinning Up Chaos: My Little Coding Corner Is Born

So I decided to add this new little coding space to the blog. I'm hoping it will help separate my coding posts from the daily-life posts on the homepage.

I'm currently working on two projects aside from my blog, both of which are a mix of Python (Django) and PHP (Laravel). I've found it's best to do the heavy lifting in Python, anything that needs multithreading or is generally task intensive, and then have the Laravel UI site talk to it via an internal API. That API is exposed to the internet but only accepts requests coming from the Docker container's internal network.
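To give an idea of what that boundary can look like, here's a minimal sketch (not my actual code) of a Django view that refuses anything not coming from the Docker network. The subnet and names here are made up:

```python
# Minimal sketch: reject API calls that don't come from the internal Docker network.
# The subnet and view are hypothetical examples, not my real setup.
import ipaddress
from functools import wraps

from django.http import HttpResponseForbidden, JsonResponse

INTERNAL_NET = ipaddress.ip_network("172.18.0.0/16")  # hypothetical Docker bridge subnet


def internal_only(view):
    @wraps(view)
    def wrapped(request, *args, **kwargs):
        caller = ipaddress.ip_address(request.META.get("REMOTE_ADDR", "0.0.0.0"))
        if caller not in INTERNAL_NET:
            return HttpResponseForbidden("internal API only")
        return view(request, *args, **kwargs)
    return wrapped


@internal_only
def run_heavy_task(request):
    # Kick off the Python-side heavy lifting and report back to the Laravel site.
    return JsonResponse({"status": "queued"})
```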

I've recently been trying out Playwright for the first time and was impressed by its ability to bypass bot protection. My initial need was just to scrape the first 10 pages of each Google search for certain keywords, but I soon found out that even Playwright can't make searches on Google without getting flagged; it was the only site where that happened, though. From what I can tell, the chances of botting Google searches are slim to none, and my use case isn't important enough to spend time working it out, so I'm considering just paying for a scraper API that makes the Google search for you and reports back with the results as JSON. My usage will only grow over time though; even at the bare minimum of searches I'd want to make, 10 pages per keyword and 5 keywords per day, that's 50 pages/searches a day, which works out to roughly 1,500 searches a month and would add up quickly in costs. I did find an open-source project that somehow spins up an internal scraping API via a Docker image, but I have no idea whether it works or how, as it would at least need proxies. I'm going to give it a shot after this post.
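For context, this is roughly the shape of that kind of scrape, a rough sketch rather than my actual script. The selector is an assumption that breaks whenever Google changes its markup, and as I said above, Google will flag this sort of traffic pretty quickly anyway:

```python
# Rough Playwright sketch of scraping Google result links for a keyword.
# Illustrative only: the selector is an assumption, and Google flags this kind of automation.
from urllib.parse import quote_plus

from playwright.sync_api import sync_playwright


def scrape_google(keyword: str, pages: int = 10) -> list[str]:
    links: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for n in range(pages):
            # Google paginates organic results with the `start` parameter in steps of 10.
            page.goto(f"https://www.google.com/search?q={quote_plus(keyword)}&start={n * 10}")
            # Organic result titles usually render as an <h3> inside an <a>.
            hrefs = page.eval_on_selector_all(
                "a h3",
                "els => els.map(e => e.closest('a') && e.closest('a').href)",
            )
            links.extend(h for h in hrefs if h)
        browser.close()
    return links


if __name__ == "__main__":
    print(scrape_google("example keyword", pages=1))
```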

The Playwright script did still have some use though, as I found I can repurpose it as an account creator for a platform I need accounts on, so it will still come in handy. I always prefer plain HTTP request scripts where possible, but raw HTTP requests are much easier to detect as bot activity, and I'm surprised I've managed to build half the things I have with them. Essentially it just comes down to reverse engineering the requests and learning how to get around whatever bot protection the site has implemented. Hopefully this weird and wonderful GitHub project will solve my Google search issue, and I'm hoping to get a good number of issues ironed out today even though I haven't slept since yesterday.
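To show what I mean by reverse engineering the requests, here's a bare-bones sketch of the HTTP-request approach; the endpoint, payload fields, and headers are all hypothetical stand-ins for whatever the real site actually expects:

```python
# Bare-bones sketch of the "reproduce what the browser sends" approach.
# Every URL, field, and header here is a hypothetical placeholder.
import requests

session = requests.Session()
session.headers.update({
    # Mimic a normal browser fingerprint as far as plain headers allow.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-GB,en;q=0.9",
})

# Many sites hand out cookies and CSRF tokens on the first GET, so capture those first.
session.get("https://example.com/signup", timeout=30)

resp = session.post(
    "https://example.com/api/register",  # hypothetical endpoint found by inspecting browser traffic
    json={
        "username": "new_user",
        "email": "new_user@example.com",
        "password": "...",  # placeholder
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```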


Stitch · Dec 09, 2025