Memory leakage in getRowid
fishpadak opened this issue · 5 comments
Hi all.
What we're facing is that RSS keeps increasing in our services. It only occurs when we reuse DB connections via sql.SetMaxIdleConns; there is no memory growth unless connections are reused. We suspect somewhere in go-oci8, because memleak reports that a huge portion of the memory is allocated under OCIAttrGet.
```
$ sudo ./memleak -p 18468
[00:42:52] Top 10 stacks with outstanding allocations:
...
...
184020624 bytes in 134 allocations from stack
    kpummapg+0x6c [libclntsh.so.19.1]
    kgh_invoke_alloc_cb+0xa2 [libclntsh.so.19.1]
    kghgex+0xa99 [libclntsh.so.19.1]
    kghfnd+0x188 [libclntsh.so.19.1]
    kghalo+0x12f6 [libclntsh.so.19.1]
    kghgex+0x251 [libclntsh.so.19.1]
    kghfnd+0x188 [libclntsh.so.19.1]
    kghalo+0x12f6 [libclntsh.so.19.1]
    kpuhhaloV1+0x1ae [libclntsh.so.19.1]
    kpugattr+0x1791 [libclntsh.so.19.1]
    _cgo_a88c4d1ba6ca_Cfunc_OCIAttrGet+0x2a [nbase-cdc-vanilla]
    [unknown]
```
We found a total of 9 call sites of OCIAttrGet; none of them has a memory issue except getRowid() (in statement.go). In getRowid(), it looks like the rowidP descriptor allocated via OCIDescriptorAlloc is never freed until the connection that holds it is closed.
There have been memory leaks fixed in older versions. What version are you using?
We updated go-oci8 to the latest master (b5e671b), but it doesn't help; RSS still keeps increasing, as shown below. Most of the memory is still occupied by OCIAttrGet, according to memleak:
```
...
59767512 bytes in 40 allocations from stack
    kpummapg+0x6c [libclntsh.so.19.1]
    kgh_invoke_alloc_cb+0xa2 [libclntsh.so.19.1]
    kghgex+0xa99 [libclntsh.so.19.1]
    kghfnd+0x188 [libclntsh.so.19.1]
    kghalo+0x12f6 [libclntsh.so.19.1]
    kghgex+0x251 [libclntsh.so.19.1]
    kghfnd+0x188 [libclntsh.so.19.1]
    kghalo+0x12f6 [libclntsh.so.19.1]
    kpuhhaloV1+0x1ae [libclntsh.so.19.1]
    kpugattr+0x1791 [libclntsh.so.19.1]
    _cgo_1a75c0a1c64d_Cfunc_OCIAttrGet+0x2a [nbase-cdc]
    [unknown]
77592936 bytes in 53 allocations from stack
    kpummapg+0x6c [libclntsh.so.19.1]
    kgh_invoke_alloc_cb+0xa2 [libclntsh.so.19.1]
    kghgex+0xa99 [libclntsh.so.19.1]
    kghfnd+0x188 [libclntsh.so.19.1]
    kghalo+0x12f6 [libclntsh.so.19.1]
    kghgex+0x251 [libclntsh.so.19.1]
    kghfnd+0x188 [libclntsh.so.19.1]
    kghalo+0x12f6 [libclntsh.so.19.1]
    kpuhhaloV1+0x1ae [libclntsh.so.19.1]
    kpugattr+0x1791 [libclntsh.so.19.1]
    _cgo_1a75c0a1c64d_Cfunc_OCIAttrGet+0x2a [nbase-cdc]
```
See statement.go, lines 671 to 693 at b5e671b.
We added defer C.OCIDescriptorFree(*rowidP, C.OCI_DTYPE_ROWID) in getRowid(), and RSS stopped increasing. But I'm not sure whether this is the correct fix, or whether it has any side effects.
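To illustrate why the defer helps: the descriptor lives in native memory allocated by the Oracle client, which the Go GC cannot see, so it accumulates on a long-lived (idle-pooled) connection unless each call releases it. The real getRowid needs the Oracle client to build, so this is a minimal sketch where descriptorAlloc/descriptorFree are hypothetical stand-ins for C.OCIDescriptorAlloc/C.OCIDescriptorFree:

```go
package main

import "fmt"

// live counts outstanding "descriptors" — a stand-in for native memory
// held by OCIDescriptorAlloc, which is invisible to the Go GC.
var live int

// descriptorAlloc / descriptorFree are hypothetical stand-ins for
// C.OCIDescriptorAlloc / C.OCIDescriptorFree.
func descriptorAlloc() *struct{} { live++; return &struct{}{} }
func descriptorFree(*struct{})   { live-- }

// getRowid allocates a ROWID descriptor per call. With fixed=false it
// mirrors the current code (nothing is freed until the connection
// closes); with fixed=true it mirrors the patch (free is deferred).
func getRowid(fixed bool) {
	d := descriptorAlloc()
	if fixed {
		defer descriptorFree(d)
	}
	// ... OCIAttrGet(..., OCI_ATTR_ROWID, ...) would read into d here ...
}

// simulate runs n getRowid calls on one "connection" and reports how
// many descriptors are still outstanding afterwards.
func simulate(n int, fixed bool) int {
	live = 0
	for i := 0; i < n; i++ {
		getRowid(fixed)
	}
	return live
}

func main() {
	fmt.Println("leaky, outstanding:", simulate(1000, false))
	fmt.Println("fixed, outstanding:", simulate(1000, true))
}
```

Without the defer, outstanding descriptors grow linearly with the number of statements executed on a reused connection, which matches the RSS growth seen only when idle connections are pooled.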
Here is code to reproduce the leak:
```go
package main

import (
	"database/sql"
	"flag"
	"fmt"
	"os"
	"time"

	_ "github.com/mattn/go-oci8"
	"github.com/pkg/errors"
	"github.com/shirou/gopsutil/process"
)

var (
	dsn        string
	numWorkers = 16
)

func init() {
	flag.StringVar(&dsn, "dsn", "", "")
	flag.Parse()
	if dsn == "" {
		panic("empty dsn")
	}
}

func main() {
	db, err := sql.Open("oci8", dsn)
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		panic(err)
	}
	db.SetMaxIdleConns(numWorkers)

	ch := make(chan int, 4096)
	errCh := make(chan error, numWorkers)
	for i := 0; i < numWorkers; i++ {
		go func() {
			errCh <- run(db, ch)
		}()
	}

	go func() {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				fmt.Printf("rss: %d MB\n", getRss()/(1024*1024))
			}
		}
	}()

	for j := 0; ; j++ {
		select {
		case ch <- j:
		case err := <-errCh:
			fmt.Printf("fail to run, err: %v", err)
			os.Exit(1)
		}
	}
}

var (
	insert = `INSERT INTO dongyun_memtest3(a,b,c) VALUES(:1,'b',0)`
	delete = `DELETE FROM dongyun_memtest3 WHERE a=:1`
)

func run(db *sql.DB, ch <-chan int) error {
	for {
		select {
		case v := <-ch:
			_, err := db.Exec(insert, v)
			if err != nil {
				return errors.Wrap(err, "fail to insert")
			}
			_, err = db.Exec(delete, v)
			if err != nil {
				return errors.Wrap(err, "fail to delete")
			}
		}
	}
}

func getRss() uint64 {
	pid := os.Getpid()
	p, err := process.NewProcess(int32(pid))
	if err != nil {
		panic(err)
	}
	info, err := p.MemoryInfo()
	if err != nil {
		panic(err)
	}
	return info.RSS
}
```
RSS keeps growing:

```
rss: 53 MB
rss: 57 MB
rss: 63 MB
..
rss: 495 MB
rss: 499 MB
```
Thank you for your help!