I have a Flux query that takes a long time to run. The array variable (ldevname) can hold around 1,000 values:
// Build the list of LDEV IDs that belong to the selected pool
ldevname = from(bucket: v.bucket)
    |> range(start: -365d)
    |> filter(fn: (r) => r._measurement == "LDEV_CNF")
    |> filter(fn: (r) => r.poolid == "${pool}")
    |> keep(columns: ["ldevID_cnf"])
    |> group()
    |> distinct(column: "ldevID_cnf")
    |> findColumn(fn: (key) => true, column: "_value")

// Main query: keep only the IOPS series whose ldevID is in that list
from(bucket: v.bucket)
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r._measurement == "LDEV" and r._field == "LDEV_IOPS")
    |> filter(fn: (r) => contains(value: r.ldevID, set: ldevname))
Is it possible to optimize this query?
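For instance, would replacing the contains() filter with a join() on the LDEV IDs help for a set this size? Below is a rough, untested sketch of what I mean (ids and iops are just placeholder names; the bucket, measurements, and tags are the same as above):

ids = from(bucket: v.bucket)
    |> range(start: -365d)
    |> filter(fn: (r) => r._measurement == "LDEV_CNF" and r.poolid == "${pool}")
    |> keep(columns: ["ldevID_cnf"])
    |> group()
    |> distinct(column: "ldevID_cnf")
    // distinct() puts its result in _value; rename it so it can be joined on
    |> rename(columns: {_value: "ldevID"})

iops = from(bucket: v.bucket)
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r._measurement == "LDEV" and r._field == "LDEV_IOPS")

// inner join keeps only the IOPS rows whose ldevID appears in the ids stream
join(tables: {ids: ids, iops: iops}, on: ["ldevID"])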
Many thanks,