TableSessionPool panics under concurrent load when goroutine count exceeds maxSize #159
Description
Environment
Client: github.com/apache/iotdb-client-go/v2 v2.0.3-1
Go: 1.23+ | OS: Linux | Dialect: table
What I was doing
I created a TableSessionPool with maxSize = 10 and ran a concurrent fetch test — multiple goroutines each getting a session, running a SELECT query, and closing the session. At 10 goroutines everything worked fine. As I increased the count beyond the pool's maxSize, the program panicked.
Pool config used
client.NewTableSessionPool(cfg, 10, 20000, 20000, false)
// maxSize=10, connectionTimeout=20s, waitTimeout=20s
Test that triggered the panic
var wg sync.WaitGroup
for i := 0; i < 80; i++ { // 80 goroutines, pool maxSize is 10
    wg.Add(1)
    go func() {
        defer wg.Done()
        sess, _ := pool.GetSession()
        defer sess.Close() // closes when the goroutine returns
        var timeout int64 = 300000
        result, _ := sess.ExecuteQueryStatement(
            "SELECT * FROM device_stats ORDER BY time DESC LIMIT 100",
            &timeout,
        )
        // result is iterated here; the deferred sess.Close() runs after this loop
        for {
            has, _ := result.Next()
            if !has {
                break
            }
        }
    }()
}
wg.Wait()
Panic output
panic: runtime error: slice bounds out of range [:4138] with capacity 4096
goroutine 78 [running]:
bufio.(*Reader).Read(...)
/usr/local/go/src/bufio/bufio.go:258
github.com/apache/thrift/.../TFramedTransport.readFrame(...)
.../framed_transport.go:199
github.com/apache/iotdb-client-go/v2/client.(*PooledTableSession).ExecuteQueryStatement(...)
.../tablesessionpool.go:126
Additional observation — session close timing matters
I noticed the panic only occurs when the SessionDataSet is iterated after the session is closed. If I fully process the result set before closing the session, the error does not occur:
// ❌ Panics under concurrency — result iterated after session closes
sess, _ := pool.GetSession()
defer sess.Close() // closes when function returns
result, _ := sess.ExecuteQueryStatement(query, &timeout)
return result // caller iterates after session is gone
// ✅ Works fine — result fully consumed before session closes
sess, _ := pool.GetSession()
defer sess.Close()
result, _ := sess.ExecuteQueryStatement(query, &timeout)
rows := materialize(result) // drain fully while session is alive
return rows
Clarification needed
I am not sure if this is expected behaviour — i.e. whether SessionDataSet is intentionally tied to the lifetime of the session and must be consumed before the session closes, or whether it should remain usable after the session is returned to the pool. The documentation does not mention this. Clarification on the intended usage would be helpful.
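For context, the `materialize` helper used in the snippets above is nothing library-specific: it just drains the cursor and copies rows out while the session is still alive. A generic sketch of that pattern follows; `rowCursor`, `drain`, and `fakeCursor` are my own names, and only `Next()` mirrors the `SessionDataSet` method actually used in this report:

```go
package main

import "fmt"

// rowCursor captures the only SessionDataSet method this report relies on:
// Next() advances the cursor and reports whether a row is available.
type rowCursor interface {
	Next() (bool, error)
}

// drain walks the cursor to exhaustion, copying each row out via the
// caller-supplied scan func, so nothing touches the cursor after the
// owning session is closed.
func drain[T any](c rowCursor, scan func() (T, error)) ([]T, error) {
	var rows []T
	for {
		has, err := c.Next()
		if err != nil {
			return rows, err
		}
		if !has {
			return rows, nil
		}
		row, err := scan()
		if err != nil {
			return rows, err
		}
		rows = append(rows, row)
	}
}

// fakeCursor yields n rows; used only to demonstrate drain.
type fakeCursor struct{ n, i int }

func (f *fakeCursor) Next() (bool, error) { f.i++; return f.i <= f.n, nil }

func main() {
	c := &fakeCursor{n: 3}
	rows, _ := drain(c, func() (int, error) { return c.i, nil })
	fmt.Println(rows) // [1 2 3]
}
```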
What I expected
When the pool is at maxSize, I expected GetSession() to block and wait up to waitToGetSessionTimeoutInMs for a session to become available, and return an error if the wait times out — not panic. I also expected the returned SessionDataSet to be safe to iterate after the session is closed, or at minimum for the documentation to explicitly warn that it is not.
Workaround
Two changes together eliminate the panic. First, fully consume the SessionDataSet before closing the session. Second, add an application-level semaphore sized to maxSize to prevent more concurrent sessions than the pool allows:
sem := make(chan struct{}, 10) // same as pool maxSize

func fetch() ([]row, error) {
    sem <- struct{}{}
    defer func() { <-sem }()
    sess, err := pool.GetSession()
    if err != nil {
        return nil, err
    }
    defer sess.Close()
    var timeout int64 = 300000
    result, err := sess.ExecuteQueryStatement("SELECT * FROM ...", &timeout)
    if err != nil {
        return nil, err
    }
    return materialize(result) // drain fully before defer sess.Close() fires
}