Blocking Bots Considered Harmful

To Serve and Protect

One of the key responsibilities of infrastructure teams is protecting services from unwanted traffic. We implement various mechanisms to counter synthetic HTTP requests: rate limiting, geo-blocking, Web Application Firewalls, challenges, and CAPTCHAs. We routinely analyze logs, respond to traffic spikes, and prevent scraping. We ensure our services are accessed primarily by humans and a limited group of trusted bots. The principle has always been simple: the web should be built for humans.
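To make one of these mechanisms concrete, here is a minimal token-bucket rate limiter sketch in TypeScript. The capacity and refill rate are illustrative assumptions, not recommendations from any production setup.

```typescript
// Minimal token-bucket rate limiter (illustrative values, not recommendations).
interface Bucket {
  tokens: number;      // tokens currently available
  lastRefill: number;  // timestamp of the last refill, in milliseconds
}

const CAPACITY = 10;      // assumed maximum burst size
const REFILL_PER_SEC = 5; // assumed sustained requests per second
const buckets = new Map<string, Bucket>();

// Returns true if a request from `clientKey` (e.g. an IP address) may proceed.
function allowRequest(clientKey: string, now: number = Date.now()): boolean {
  const bucket = buckets.get(clientKey) ?? { tokens: CAPACITY, lastRefill: now };
  // Refill proportionally to the time elapsed since the last check.
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + elapsedSec * REFILL_PER_SEC);
  bucket.lastRefill = now;
  buckets.set(clientKey, bucket);
  if (bucket.tokens < 1) {
    return false; // over the limit: reject, delay, or escalate to a challenge
  }
  bucket.tokens -= 1;
  return true;
}
```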

Why do we wait for search results?

Searching digital data is an everyday task, one that lets us navigate vast amounts of information.

It is both common and complex, involving a wide range of search engines, indexing methods, search algorithms, and implementations.

Typically, searches are performed remotely: on a server, in a database, or in a cloud service. We are accustomed to waiting for search results.


What if we could get search results instantly?
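As a hint of where this idea leads, here is a minimal sketch in TypeScript, with a made-up two-document corpus: if the index lives in (or ships with) the page, a query never leaves the browser. This is only an illustration of the principle, not any particular search library.

```typescript
// Minimal in-browser search sketch: an inverted index queried locally,
// with no network round trip. The two-document corpus is made up.
type DocId = number;

const docs: Record<DocId, string> = {
  1: "Client-side search can return results instantly",
  2: "Remote search means waiting on a server round trip",
};

// token -> ids of the documents containing it
const index = new Map<string, Set<DocId>>();

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

// Build the index once, e.g. at page load (or ship it prebuilt with the page).
for (const [id, text] of Object.entries(docs)) {
  for (const token of tokenize(text)) {
    let postings = index.get(token);
    if (!postings) {
      postings = new Set();
      index.set(token, postings);
    }
    postings.add(Number(id));
  }
}

// Rank documents by how many query tokens they contain.
function search(query: string): DocId[] {
  const hits = new Map<DocId, number>();
  for (const token of tokenize(query)) {
    for (const id of index.get(token) ?? []) {
      hits.set(id, (hits.get(id) ?? 0) + 1);
    }
  }
  return [...hits.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}

console.log(search("instant search results")); // -> [1, 2], computed locally
```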

Expected changes in S3 ACL behavior with Terraform

At the end of April 2023, AWS changed the default settings for new S3 buckets: newly created buckets have the ACL mechanism disabled by default (S3 Object Ownership is set to BucketOwnerEnforced). This can cause errors such as AccessControlListNotSupported for Terraform users who have not explicitly re-enabled ACLs on their buckets, or who still configure buckets the "old way", via the deprecated acl argument on the aws_s3_bucket resource.
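A minimal sketch of one way to adapt, assuming the current AWS provider's aws_s3_bucket_ownership_controls and aws_s3_bucket_acl resources (the bucket name and resource labels are placeholders): re-enable an ACL-compatible ownership setting before attaching the ACL.

```hcl
# Sketch: explicitly re-enable ACLs on a new bucket (all names are placeholders).
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}

# New buckets default to BucketOwnerEnforced, which disables ACLs,
# so an ACL-compatible ownership setting has to be configured first.
resource "aws_s3_bucket_ownership_controls" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "example" {
  # Ensure the ownership rule exists before the ACL is applied.
  depends_on = [aws_s3_bucket_ownership_controls.example]

  bucket = aws_s3_bucket.example.id
  acl    = "private"
}
```

The explicit depends_on matters here: without it, Terraform may try to apply the ACL before the ownership controls exist and fail with the same error.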