-
In a similar vein, I'd always thought Redka (https://github.com/nalgeon/redka) was a neat idea, since it gives you access to a subset of the Redis API backed by either SQLite or Postgres.
-
It's testing RTT rather than the peak throughput of Redis.
I'd suggest using Redis pipelining -- or better, the excellent rueidis Redis client, which performs auto-pipelining. It wouldn't be surprising to see a 10x performance boost.
https://github.com/redis/rueidis
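To see why pipelining dominates an RTT-bound benchmark, here's a back-of-the-envelope model. The latency numbers (1 ms RTT, 10 µs of server work per command) are illustrative assumptions, not measurements:

```python
# Rough model: a sequential client pays one network round trip per command,
# while a pipelined client amortizes one round trip across a whole batch.
RTT_S = 1e-3          # assumed network round trip: 1 ms
SERVICE_S = 10e-6     # assumed server-side work per command: 10 us
N_COMMANDS = 10_000
BATCH = 100           # commands per pipelined batch

def sequential_time(n):
    """Each command waits for its own round trip plus service time."""
    return n * (RTT_S + SERVICE_S)

def pipelined_time(n, batch):
    """One round trip per batch; service time is still paid per command."""
    batches = -(-n // batch)  # ceiling division
    return batches * RTT_S + n * SERVICE_S

seq = sequential_time(N_COMMANDS)
pipe = pipelined_time(N_COMMANDS, BATCH)
print(f"sequential: {seq:.2f}s, pipelined: {pipe:.2f}s, speedup: {seq / pipe:.0f}x")
```

With these assumed numbers the pipelined client comes out roughly 50x faster, which is why a pure SET/GET loop measures your network far more than it measures Redis.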
-
-
Solid Cache, the default in Rails 8, doesn't require SQLite; it works with other databases:
https://github.com/rails/solid_cache
-
I've not tried it myself but I believe that's what pgtune does: https://github.com/gregs1104/pgtune
-
> This is the kind of thing that is "hard and tedious" for only about five minutes of LLM query or web search time
Not even! If you don't need to go super deep into tablespace configs or advanced replication right away, pgtune will get you to a pretty good spot in the time it takes to fill out a form.
https://pgtune.leopard.in.ua/
https://github.com/le0pard/pgtune
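For a sense of what you get back: for a hypothetical web-application server with 4 GB of RAM and SSD storage, pgtune-style output looks roughly like the fragment below. The values follow its well-known heuristics (e.g. shared_buffers ≈ 25% of RAM, effective_cache_size ≈ 75%), but treat them as illustrative rather than the tool's exact output:

```
# Illustrative pgtune-style postgresql.conf fragment for a hypothetical
# 4 GB web server (not verbatim tool output -- run pgtune for your hardware)
shared_buffers = 1GB            # ~25% of RAM
effective_cache_size = 3GB      # ~75% of RAM
maintenance_work_mem = 256MB
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1          # assumes SSD storage
```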
-
That doesn't reflect reality. Most software in production runs off of defaults. The onus is on authors to make their publications as productive as possible. For all types of software, the rest of us aren't going to flame chart every little feature, or drop down into assembly.
I want benchmarks that represent reality, not people gaming things. That's how TechEmpower's Framework Benchmarks got to where they were, and now it's a sea of data that's frustrating to read through, because most of it is noise.
By comparison, here's a set of three tests across all of the most popular programming languages: https://github.com/andrewmcwattersandco/programming-language...
It tests empty programs, creating records, and parsing JSON. It uses idiomatic programming for each of the programming languages, as a regular developer would write that code, or in the case of JSON parsing, it uses the de facto solutions most developers would use.
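For instance, the JSON-parsing test in a suite like this presumably amounts to little more than the standard-library call a working developer would reach for. A sketch in Python (I haven't checked the repo's exact harness):

```python
import json

# The idiomatic, "de facto" approach in Python: the standard library's
# json module, which is what a regular developer would write.
payload = '{"id": 1, "name": "benchmark", "tags": ["fast", "simple"]}'

record = json.loads(payload)                    # parse JSON text into Python objects
round_tripped = json.loads(json.dumps(record))  # serialize and re-parse
assert round_tripped == record
```

The point of keeping the test this plain is that the benchmark then measures what users actually run, not a hand-tuned fast path.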
And you can see the results align closely to developers' expectations, but now quantified. Developers know C++ is faster than Python. OK, but by how much? Oh, 4 orders of magnitude? That's significant for basic operations. Something can be taken away from that.
Good benchmarks do this. They use defaults, they don't game anything, and they approach usage the way most users would.