This is an HTTP middleware for server-side application-layer caching, ideal for Go REST APIs.
It is simple, very fast, thread safe, and lets you choose the storage adapter (memory, Redis, DynamoDB etc).
The memory adapter minimizes GC overhead to near zero and supports a choice of caching algorithms (LRU, MRU, LFU, MFU). This allows it to store many gigabytes of responses while keeping great performance and staying free of memory leaks.
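As a rough illustration of what LRU eviction means for a fixed-capacity cache, here is a toy sketch (not the memory adapter's actual implementation): when the cache is full, the entry that has gone longest without being read or written is dropped first.

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a toy fixed-capacity cache with least-recently-used eviction.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element in order
}

type entry struct{ key, val string }

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// Get returns a cached value; a hit makes the entry most recently used.
func (c *lru) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).val, true
}

// Set stores a value, evicting the least recently used entry when full.
func (c *lru) Set(key, val string) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() == c.cap {
		back := c.order.Back()
		delete(c.items, back.Value.(*entry).key)
		c.order.Remove(back)
	}
	c.items[key] = c.order.PushFront(&entry{key, val})
}

func main() {
	c := newLRU(2)
	c.Set("/a", "A")
	c.Set("/b", "B")
	c.Get("/a")      // touching /a makes /b the least recently used entry
	c.Set("/c", "C") // cache is full, so /b is evicted
	_, ok := c.Get("/b")
	fmt.Println("b cached:", ok) // b cached: false
}
```

The other policies differ only in which entry is picked for eviction: MRU drops the most recently used, LFU the least frequently used, MFU the most frequently used.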
go get github.com/victorspringer/http-cache
This is an example of use with the memory adapter:
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"

	"github.com/victorspringer/http-cache"
	"github.com/victorspringer/http-cache/adapter/memory"
)

func example(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("Ok"))
}

func main() {
	memcached, err := memory.NewAdapter(
		&memory.Config{
			Algorithm: memory.LRU,
			Capacity:  10,
		},
	)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	cacheClient, err := cache.NewClient(
		&cache.Config{
			Adapter:    memcached,
			ReleaseKey: "opn",
			TTL:        10 * time.Minute,
		},
	)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	handler := http.HandlerFunc(example)

	http.Handle("/", cacheClient.Middleware(handler))
	http.ListenAndServe(":8080", nil)
}
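For intuition about what the Middleware call above does, here is a stdlib-only toy sketch of a response-caching middleware. It is a deliberate simplification (no TTL, no eviction, no adapters), not this library's implementation: on a miss it records the wrapped handler's response and stores it; on a hit it replays the stored body without calling the handler at all.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"
)

// toyCache is a minimal, thread-safe response cache keyed by request URL.
type toyCache struct {
	mu    sync.Mutex
	store map[string][]byte
}

// Middleware serves cached bodies for repeated requests; on a miss it
// captures the wrapped handler's response and stores it.
func (c *toyCache) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key := r.URL.String()
		c.mu.Lock()
		body, ok := c.store[key]
		c.mu.Unlock()
		if ok {
			w.Write(body) // cache hit: the wrapped handler is never called
			return
		}
		rec := httptest.NewRecorder() // capture the downstream response
		next.ServeHTTP(rec, r)
		body = rec.Body.Bytes()
		c.mu.Lock()
		c.store[key] = body
		c.mu.Unlock()
		w.Write(body)
	})
}

// runDemo issues three identical requests through the middleware and
// returns the last response body and how often the handler actually ran.
func runDemo() (string, int) {
	calls := 0
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		calls++
		w.Write([]byte("Ok"))
	})
	c := &toyCache{store: map[string][]byte{}}
	wrapped := c.Middleware(handler)

	var body string
	for i := 0; i < 3; i++ {
		rec := httptest.NewRecorder()
		wrapped.ServeHTTP(rec, httptest.NewRequest("GET", "/", nil))
		body = rec.Body.String()
	}
	return body, calls
}

func main() {
	body, calls := runDemo()
	fmt.Printf("body=%q handler calls=%d\n", body, calls) // body="Ok" handler calls=1
}
```

All three requests receive "Ok", but only the first one reaches the handler; the other two are served from the cache.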
Example of client initialization with the Redis adapter:

import (
	"github.com/victorspringer/http-cache"
	"github.com/victorspringer/http-cache/adapter/redis"
)

...

ringOpt := &redis.RingOptions{
	Addrs: map[string]string{
		"server": ":6379",
	},
}
cacheClient, err := cache.NewClient(
	&cache.Config{
		Adapter:    redis.NewAdapter(ringOpt),
		ReleaseKey: "opn",
		TTL:        10 * time.Minute,
	},
)
...
The benchmarks are based on allegro/bigcache's tests and were used to compare bigcache with the http-cache memory adapter.
The tests were run on an Intel i5-2410M with 8GB of RAM, on 64-bit Arch Linux.
The results are shown below:
cd adapter/memory/benchmark
go test -bench=. -benchtime=10s ./... -timeout 30m
BenchmarkHTTPCacheMemoryAdapterSet-4           5000000    343 ns/op    172 B/op   1 allocs/op
BenchmarkBigCacheSet-4                         3000000    507 ns/op    535 B/op   1 allocs/op
BenchmarkHTTPCacheMemoryAdapterGet-4          20000000    146 ns/op      0 B/op   0 allocs/op
BenchmarkBigCacheGet-4                         3000000    343 ns/op    120 B/op   3 allocs/op
BenchmarkHTTPCacheMemoryAdapterSetParallel-4  10000000    223 ns/op    172 B/op   1 allocs/op
BenchmarkBigCacheSetParallel-4                10000000    291 ns/op    661 B/op   1 allocs/op
BenchmarkHTTPCacheMemoryAdapterGetParallel-4  50000000   56.1 ns/op      0 B/op   0 allocs/op
BenchmarkBigCacheGetParallel-4                10000000    163 ns/op    120 B/op   3 allocs/op
http-cache writes are slightly faster, and its reads are much faster.
cache=http-cache go run benchmark_gc_overhead.go
Number of entries: 20000000
GC pause for http-cache memory adapter: 2.445617ms
cache=bigcache go run benchmark_gc_overhead.go
Number of entries: 20000000
GC pause for bigcache: 7.43339ms
The http-cache memory adapter incurs far shorter GC pauses, which means lower GC overhead.
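The idea behind that measurement can be sketched with the standard library alone. The following is a hypothetical stand-in, not the repository's benchmark_gc_overhead.go, and uses a much smaller entry count so it runs quickly: fill the heap with many small objects the GC must scan, force a collection, and read the stop-the-world pause time the runtime recorded.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// measurePause fills a map with n small heap-allocated entries, forces a
// garbage collection, and returns the additional GC pause time recorded
// by the runtime in MemStats.PauseTotalNs.
func measurePause(n int) time.Duration {
	m := make(map[int][]byte, n)
	for i := 0; i < n; i++ {
		m[i] = make([]byte, 16) // small values the GC must trace
	}
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	runtime.GC() // forced, blocking collection
	runtime.ReadMemStats(&after)
	runtime.KeepAlive(m) // keep the map live through the forced collection
	return time.Duration(after.PauseTotalNs - before.PauseTotalNs)
}

func main() {
	n := 100000
	fmt.Printf("Number of entries: %d\n", n)
	fmt.Printf("GC pause: %v\n", measurePause(n))
}
```

The more pointer-laden objects sit on the heap, the longer these pauses get, which is why a cache that keeps GC-visible allocations low (as the memory adapter does) pauses less under the same load.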
- Develop DynamoDB adapter
- Develop MongoDB adapter
http-cache is released under the MIT License.