
Writing a Druid extension

I was experimenting with a Druid extension prototype and ran into some difficulties. The goal of the experiment is to build something like a gRPC endpoint for Druid.

(1) Guava version

Druid relies on Guava 16.0.1, which is a very old version (~4 years old). My only guess is that another transitive dependency (Hadoop?) requires it. The earliest Guava version gRPC ever used, three years ago, was 19.0. So my first question is whether there are any plans to upgrade Guava any time soon.
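As a stopgap I've been experimenting with relocating Guava inside the extension jar via the shade plugin, so the extension's newer Guava can't clash with Druid's 16.0.1 on the classpath. A minimal sketch (the shaded package name is just a placeholder):

```xml
<!-- Hypothetical fragment of the extension's pom.xml: relocate Guava classes
     bundled with the extension so they never collide with Druid's Guava 16.0.1. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>myextension.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

This works for my prototype, but it is obviously a workaround rather than a fix, hence the question about an upgrade.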

(2) Druid thread model for query execution

I played a little with calling org.apache.druid.server.QueryLifecycleFactory::runSimple under a debugger. The stack trace was too deep to reverse-engineer easily, so I'd like to ask directly instead. Would it be possible to briefly explain how many threads (and from which thread pools) a broker node uses to process, say, a GroupBy query?

At the very least I'd like to know if calling QueryLifecycleFactory::runSimple on a thread from some "query processing pool" is better than doing it on the IO thread that received the query.
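To make the question concrete, the pattern I'm weighing is roughly the one below: the IO thread hands the work off to a dedicated processing pool and returns a future. This is a plain-Java sketch; the class and pool names are placeholders of mine, not Druid classes.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Sketch of offloading query work from an IO thread onto a processing pool.
// Names are placeholders; this is not Druid's actual threading model.
public class QueryOffload {
    private final ExecutorService processingPool;

    public QueryOffload(int threads) {
        this.processingPool = Executors.newFixedThreadPool(threads);
    }

    // Called from the IO thread: submit the work and return immediately,
    // keeping the IO thread free to accept more requests.
    public <T> CompletableFuture<T> submit(Supplier<T> queryWork) {
        return CompletableFuture.supplyAsync(queryWork, processingPool);
    }

    public void shutdown() {
        processingPool.shutdown();
    }
}
```

The question is essentially whether runSimple does enough blocking work that this indirection pays off, or whether Druid already dispatches internally.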

(3) Yielder

Is it safe to assume that QueryLifecycleFactory::runSimple always returns a Yielder<>? QueryLifecycle omits generic type parameters rather liberally when dealing with Sequence instances.
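For context, my mental model of consuming the result is roughly the drain loop below. This is a toy, self-contained imitation I wrote for the email; Druid's real Yielder/YieldingAccumulator API differs in its details, which is part of what I'm asking about.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy stand-in for a Yielder-style pull interface (NOT Druid's actual API):
// get() returns the current element, next() advances, isDone() terminates.
class ToyYielder<T> {
    private final Iterator<T> it;
    private T current;
    private boolean done;

    ToyYielder(Iterable<T> source) {
        this.it = source.iterator();
        advance();
    }

    private void advance() {
        if (it.hasNext()) {
            current = it.next();
        } else {
            done = true;
        }
    }

    boolean isDone() { return done; }
    T get() { return current; }
    ToyYielder<T> next() { advance(); return this; }
}

public class YielderDemo {
    // Drain a yielder into a list, the way I expect to consume query results.
    static <T> List<T> drain(ToyYielder<T> y) {
        List<T> out = new ArrayList<>();
        while (!y.isDone()) {
            out.add(y.get());
            y = y.next();
        }
        return out;
    }
}
```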

(4) Calcite integration

Presumably Avatica has an option to use protobuf encoding for the returned results. Is it true that Druid cannot use it?
On a related note, is there any chance something was written down about org.apache.druid.sql.calcite?
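For reference, on the client side Avatica's remote driver selects the wire format through the JDBC URL's serialization property; the endpoint path below is illustrative, and the question is whether Druid's Avatica server side can serve it:

```
jdbc:avatica:remote:url=http://broker:8082/druid/v2/sql/avatica/;serialization=protobuf
```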

Thank you
