Constructor and Description |
---|
ResponseException(javax.ws.rs.core.Response.StatusType statusType,
DruidAggregationQuery<?> druidQuery,
Throwable error)
Deprecated.
To ensure correct serialization of the Druid query, an ObjectWriter with all appropriate
configuration should be passed into the constructor.
|
ResponseException(javax.ws.rs.core.Response.StatusType statusType,
DruidAggregationQuery<?> druidQuery,
Throwable error,
com.fasterxml.jackson.databind.ObjectWriter writer)
Class constructor taking a throwable, the other parameters, and an ObjectWriter for serializing the Druid query.
|
Modifier and Type | Method and Description |
---|---|
DruidAggregationQuery<?> |
DruidQueryBuilder.buildQuery(DataApiRequest request,
TemplateDruidQuery template)
Build a druid query object from an API request and its templateDruidQuery.
|
Modifier and Type | Method and Description |
---|---|
Stream<Column> |
DruidResponseParser.buildSchemaColumns(DruidAggregationQuery<?> druidQuery)
Produce the schema-defining columns for a given druid query.
|
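The idea behind `buildSchemaColumns` can be sketched in plain Java: derive one schema column per grouping dimension and one per aggregated metric. The `Column` record and the list-of-names inputs below are simplified stand-ins for Fili's real types, not its actual API.

```java
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch: one column per dimension, one per aggregation.
// Column and the String-name inputs are stand-ins for Fili's classes.
class SchemaColumnsSketch {
    record Column(String name) { }

    static Stream<Column> buildSchemaColumns(List<String> dimensionNames,
                                             List<String> aggregationNames) {
        // Grouping dimensions first, then aggregated metrics.
        return Stream.concat(
                dimensionNames.stream().map(Column::new),
                aggregationNames.stream().map(Column::new));
    }
}
```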
Modifier and Type | Class and Description |
---|---|
class |
TemplateDruidQuery
Template Druid Query.
|
Modifier and Type | Method and Description |
---|---|
default SimplifiedIntervalList |
VolatileIntervalsService.getVolatileIntervals(DruidAggregationQuery<?> query,
PhysicalTable factSource)
Deprecated.
Exists solely for backwards compatibility.
VolatileIntervalsService.getVolatileIntervals(Granularity, List, PhysicalTable) should be used instead. |
Modifier and Type | Interface and Description |
---|---|
interface |
DruidAggregationQuery<Q extends DruidAggregationQuery<? super Q>>
Common interface for Druid Query classes.
|
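The recursive type parameter `Q extends DruidAggregationQuery<? super Q>` is the self-referential ("F-bounded") generic pattern, which lets each concrete query subtype return its own type from fluent copy-with methods. A minimal illustration with stand-in types:

```java
// F-bounded generics sketch: Query's type parameter refers back to the
// implementing type, so withLimit returns the concrete subtype rather
// than the raw interface. Types here are simplified stand-ins.
class FBoundedSketch {
    interface Query<Q extends Query<? super Q>> {
        Q withLimit(int limit);
        int getLimit();
    }

    static final class GroupByQuery implements Query<GroupByQuery> {
        private final int limit;
        GroupByQuery(int limit) { this.limit = limit; }
        @Override public GroupByQuery withLimit(int limit) { return new GroupByQuery(limit); }
        @Override public int getLimit() { return limit; }
    }
}
```

Because `withLimit` returns `GroupByQuery`, callers can chain further `GroupByQuery`-specific calls without casting.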
Modifier and Type | Class and Description |
---|---|
class |
AbstractDruidAggregationQuery<Q extends AbstractDruidAggregationQuery<? super Q>>
Base class for druid aggregation queries.
|
class |
GroupByQuery
Druid groupBy query.
|
class |
LookbackQuery
Druid lookback query.
|
class |
TimeSeriesQuery
Druid timeseries query.
|
class |
TopNQuery
Druid topN query.
|
class |
WeightEvaluationQuery
Query that generates a weight estimating the cost of evaluating the given query.
|
Modifier and Type | Method and Description |
---|---|
default DruidAggregationQuery<?> |
DruidAggregationQuery.getInnermostQuery() |
Modifier and Type | Method and Description |
---|---|
default Optional<? extends DruidAggregationQuery> |
DruidAggregationQuery.getInnerQuery() |
Optional<? extends DruidAggregationQuery> |
LookbackQuery.getInnerQuery() |
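The relationship between `getInnerQuery()` and the default `getInnermostQuery()` can be sketched as follows: keep unwrapping inner queries (as a `LookbackQuery` carries one) until none remains. The `Query` interface below is a simplified stand-in for `DruidAggregationQuery`.

```java
import java.util.Optional;

// Sketch of a default getInnermostQuery() written in terms of
// getInnerQuery(): recurse while an inner query is present.
// Query is a stand-in for DruidAggregationQuery.
class InnermostSketch {
    interface Query {
        // Queries with no nested query return empty (the default).
        default Optional<Query> getInnerQuery() { return Optional.empty(); }

        // Unwrap until a query with no inner query is reached.
        default Query getInnermostQuery() {
            return getInnerQuery().map(Query::getInnermostQuery).orElse(this);
        }
    }
}
```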
Modifier and Type | Method and Description |
---|---|
SimplifiedIntervalList |
LookbackQuery.LookbackQueryRequestedIntervalsFunction.apply(DruidAggregationQuery<?> druidAggregationQuery) |
static long |
WeightEvaluationQuery.getWorstCaseWeightEstimate(DruidAggregationQuery<?> query)
Evaluate the Druid query for the worst possible case of an expensive aggregation that could bring down Druid.
|
static WeightEvaluationQuery |
WeightEvaluationQuery.makeWeightEvaluationQuery(DruidAggregationQuery<?> query)
Evaluate the Druid query for an expensive aggregation that could bring down Druid.
|
Constructor and Description |
---|
WeightEvaluationQuery(DruidAggregationQuery<?> query,
int weight)
Generate a query that calculates the even weight of the response cardinality of the given query.
|
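One plausible shape for a worst-case weight estimate is the product of the grouping dimensions' cardinalities times the number of time buckets in the query, an upper bound on the result rows Druid might produce. This is only an illustration of the idea, not Fili's actual formula:

```java
import java.util.List;

// Hedged sketch: worst-case row count as the product of dimension
// cardinalities times the number of granularity buckets. Not Fili's
// real estimate, just the underlying idea.
class WeightSketch {
    static long worstCaseWeight(List<Long> dimensionCardinalities, long timeBuckets) {
        long rowsPerBucket = 1;
        for (long c : dimensionCardinalities) {
            rowsPerBucket = Math.multiplyExact(rowsPerBucket, c);  // overflow-safe
        }
        return Math.multiplyExact(rowsPerBucket, timeBuckets);
    }
}
```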
Modifier and Type | Method and Description |
---|---|
Optional<Long> |
SegmentIntervalsHashIdGenerator.getSegmentSetId(DruidAggregationQuery<?> query) |
Optional<T> |
QuerySigningService.getSegmentSetId(DruidAggregationQuery<?> query)
Return an identifier that corresponds to the set of segments that a query references.
|
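The intent of `getSegmentSetId` is to derive a stable identifier from the set of segments a query references, so two queries over the same segments can share a cache-key component. The sketch below hashes a sorted set of segment-id strings; Fili's real implementation works from segment interval metadata, and the types here are stand-ins.

```java
import java.util.Optional;
import java.util.SortedSet;

// Sketch: stable id for a set of segments. Sorting makes the hash
// order-independent; an empty set yields no id.
class SegmentIdSketch {
    static Optional<Long> getSegmentSetId(SortedSet<String> segmentIds) {
        if (segmentIds.isEmpty()) {
            return Optional.empty();  // no known segments: no id
        }
        long hash = 17;
        for (String id : segmentIds) {
            hash = 31 * hash + id.hashCode();  // stable because iteration is sorted
        }
        return Optional.of(hash);
    }
}
```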
Modifier and Type | Class and Description |
---|---|
class |
SqlAggregationQuery
Wrapper around a
DruidAggregationQuery which always reports
itself as a DefaultQueryType.GROUP_BY . |
Modifier and Type | Method and Description |
---|---|
String |
DruidQueryToSqlConverter.buildSqlQuery(DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper)
Builds the druid query as SQL and returns it as a string.
|
protected static Map<String,Function<String,Number>> |
SqlResultSetProcessor.getAggregationTypeMapper(DruidAggregationQuery<?> druidQuery)
Creates a map from each aggregation name, i.e.
|
protected List<org.apache.calcite.rex.RexNode> |
DruidQueryToSqlConverter.getAllGroupByColumns(org.apache.calcite.tools.RelBuilder builder,
DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper,
String timestampColumn)
Collects all the time columns and dimensions to be grouped on.
|
protected List<org.apache.calcite.tools.RelBuilder.AggCall> |
DruidQueryToSqlConverter.getAllQueryAggregations(org.apache.calcite.tools.RelBuilder builder,
DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper)
Find all druid aggregations and convert them to
RelBuilder.AggCall . |
protected org.apache.calcite.rex.RexNode |
DruidQueryToSqlConverter.getAllWhereFilters(org.apache.calcite.tools.RelBuilder builder,
DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper,
String timestampColumn)
Returns the RexNode used to filter the druidQuery.
|
protected Collection<org.apache.calcite.rex.RexNode> |
DruidQueryToSqlConverter.getHavingFilter(org.apache.calcite.tools.RelBuilder builder,
DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper)
Gets the collection of having filters to be applied from the druid query.
|
protected int |
DruidQueryToSqlConverter.getLimit(DruidAggregationQuery<?> druidQuery)
Gets the number of rows to limit results to for a groupBy query.
|
protected List<org.apache.calcite.rex.RexNode> |
DruidQueryToSqlConverter.getSort(org.apache.calcite.tools.RelBuilder builder,
DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper,
String timestampColumn)
Finds the sorting for a druid query.
|
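Taken together, the `DruidQueryToSqlConverter` helpers above suggest a pipeline: time and query filters become the WHERE clause, dimensions the GROUP BY, aggregations the SELECT list, then HAVING, ORDER BY, and LIMIT. The sketch below replaces Calcite's `RelBuilder` with plain string assembly purely for illustration; it is not the real converter.

```java
import java.util.List;

// String-based sketch of the Druid-to-SQL assembly order. The real
// converter builds a Calcite relational expression instead of text.
class SqlPipelineSketch {
    static String buildSqlQuery(String table, List<String> groupBy,
                                List<String> aggregations, String timeFilter, int limit) {
        String select = String.join(", ", groupBy) + ", " + String.join(", ", aggregations);
        StringBuilder sql = new StringBuilder("SELECT ").append(select)
                .append(" FROM ").append(table)
                .append(" WHERE ").append(timeFilter)       // time + query filters
                .append(" GROUP BY ").append(String.join(", ", groupBy));
        if (limit >= 0) {
            sql.append(" LIMIT ").append(limit);            // groupBy row limit
        }
        return sql.toString();
    }
}
```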
Constructor and Description |
---|
SqlAggregationQuery(DruidAggregationQuery<?> query)
Wraps a query as a GroupBy Query.
|
SqlResultSetProcessor(DruidAggregationQuery<?> druidQuery,
ApiToFieldMapper apiToFieldMapper,
com.fasterxml.jackson.databind.ObjectMapper objectMapper,
SqlTimeConverter sqlTimeConverter)
Builds a processor that reads a set of SQL results and returns them in the
same format as a Druid groupBy response.
|
Modifier and Type | Method and Description |
---|---|
org.apache.calcite.rex.RexNode |
SqlTimeConverter.buildTimeFilters(org.apache.calcite.tools.RelBuilder builder,
DruidAggregationQuery<?> druidQuery,
String timestampColumn)
Builds the time filters to only select rows that occur within the intervals of the query.
|
org.joda.time.DateTime |
SqlTimeConverter.getIntervalStart(int offset,
String[] recordValues,
DruidAggregationQuery<?> druidQuery)
Given an array of strings (a row from a
ResultSet ) and the
Granularity used to make groupBy statements on time, parses out the DateTime
representing the beginning of the interval the row was grouped on. |
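The job of `getIntervalStart` can be sketched with `java.time`: the SQL row carries the time fields the query grouped on, flattened into consecutive columns starting at some offset, and they are reassembled into the interval's start instant. The year/month/day column layout below is an assumption for illustration only; the real converter handles whatever granularity the query used.

```java
import java.time.LocalDateTime;

// Sketch: rebuild an interval start from time columns flattened into a
// result-set row. Assumes (hypothetically) year, month, day columns
// beginning at `offset`.
class IntervalStartSketch {
    static LocalDateTime intervalStart(int offset, String[] row) {
        int year = Integer.parseInt(row[offset]);
        int month = Integer.parseInt(row[offset + 1]);
        int day = Integer.parseInt(row[offset + 2]);
        return LocalDateTime.of(year, month, day, 0, 0);  // start of the bucket
    }
}
```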
Constructor and Description |
---|
DataSourceConstraint(DataApiRequest dataApiRequest,
DruidAggregationQuery<?> templateDruidQuery)
Constructor.
|
Modifier and Type | Method and Description |
---|---|
static Set<String> |
TableUtils.getColumnNames(DataApiRequest request,
DruidAggregationQuery<?> query)
Get the schema column names from the dimensions and metrics.
|
static Set<String> |
TableUtils.getColumnNames(DataApiRequest request,
DruidAggregationQuery<?> query,
PhysicalTable table)
Deprecated.
in favor of getColumnNames(DataApiRequest, DruidAggregationQuery), which returns dimension API names
|
static Stream<Dimension> |
TableUtils.getDimensions(DataApiRequest request,
DruidAggregationQuery<?> query)
Get a stream returning all the fact store dimensions.
|
Modifier and Type | Method and Description |
---|---|
protected SuccessCallback |
WeightCheckRequestHandler.buildSuccessCallback(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response,
long queryRowLimit)
Build a callback which continues the original request or refuses it with an HTTP INSUFFICIENT_STORAGE (507)
status based on the cardinality of the requester's query as measured by the weight check query.
|
protected String |
CacheV2RequestHandler.getKey(DruidAggregationQuery<?> druidQuery)
Construct the cache key.
|
protected String |
CacheRequestHandler.getKey(DruidAggregationQuery<?> druidQuery)
Construct the cache key.
|
boolean |
SqlRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response)
Handles a request by detecting whether it is for a SQL-backed table and, if so, sending it to a SQL backend.
|
boolean |
TopNMapperRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
WeightCheckRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
PaginationRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
AsyncWebServiceRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
SplitQueryRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
EtagCacheRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
WebServiceHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
WebServiceSelectorRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
VolatileDataRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
CacheV2RequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
DataRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response)
Handle the response, passing the request down the chain as necessary.
|
boolean |
CacheRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
DruidPartialDataRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
PartialDataRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
DebugRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
boolean |
DateTimeSortRequestHandler.handleRequest(RequestContext context,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
ResponseProcessor response) |
WebServiceHandler |
WebServiceHandlerSelector.select(DruidAggregationQuery<?> druidQuery,
DataApiRequest request,
RequestContext context)
Select which web service to use, based on the request information.
|
WebServiceHandler |
DefaultWebServiceHandlerSelector.select(DruidAggregationQuery<?> druidQuery,
DataApiRequest request,
RequestContext context) |
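The many `handleRequest` implementations above form a chain of responsibility: each `DataRequestHandler` does its own work (caching, pagination, weight checks, splitting, and so on) and then delegates to the next handler, with the last one dispatching the query to the web service. A minimal stand-in sketch of the pattern, not Fili's actual handler classes:

```java
// Chain-of-responsibility sketch: each handler wraps the next. The
// Handler/LoggingHandler/TerminalHandler names are hypothetical
// stand-ins for Fili's DataRequestHandler implementations.
class HandlerChainSketch {
    interface Handler {
        boolean handleRequest(String request);
    }

    static final class LoggingHandler implements Handler {
        private final Handler next;
        LoggingHandler(Handler next) { this.next = next; }
        @Override public boolean handleRequest(String request) {
            // ... do this handler's work here ...
            return next.handleRequest(request);  // pass down the chain
        }
    }

    static final class TerminalHandler implements Handler {
        @Override public boolean handleRequest(String request) {
            return true;  // last link: actually dispatch the query
        }
    }
}
```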
Modifier and Type | Method and Description |
---|---|
ResultSet |
ResultSetResponseProcessor.buildResultSet(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
org.joda.time.DateTimeZone dateTimeZone)
Build a result set using the api request time grain.
|
HttpErrorCallback |
CacheV2ResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
ResponseProcessor.getErrorCallback(DruidAggregationQuery<?> query)
Callback for handling http errors.
|
HttpErrorCallback |
ResultSetResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
DruidPartialDataResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
SplitQueryResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
EtagCacheResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
WeightCheckResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
CachingResponseProcessor.getErrorCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
CacheV2ResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
ResponseProcessor.getFailureCallback(DruidAggregationQuery<?> query)
Callback handler for unexpected failures.
|
FailureCallback |
ResultSetResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
DruidPartialDataResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
SplitQueryResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
EtagCacheResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
WeightCheckResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
FailureCallback |
CachingResponseProcessor.getFailureCallback(DruidAggregationQuery<?> druidQuery) |
HttpErrorCallback |
MappingResponseProcessor.getStandardError(rx.subjects.Subject responseEmitter,
DruidAggregationQuery<?> druidQuery)
Get the standard error callback.
|
FailureCallback |
MappingResponseProcessor.getStandardFailure(rx.subjects.Subject responseEmitter,
DruidAggregationQuery<?> druidQuery)
Get the standard failure callback.
|
void |
CacheV2ResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
LoggingContext metadata) |
void |
ResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> query,
LoggingContext metadata)
Process the response json and respond to the original web request.
|
void |
ResultSetResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
LoggingContext metadata) |
void |
DruidPartialDataResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> query,
LoggingContext metadata)
If the status code is 200, do the following:
1. Extract uncoveredIntervalsOverflowed from the X-Druid-Response-Context inside the JsonNode passed into
DruidPartialDataResponseProcessor::processResponse; if it is true, invoke the error response saying the limit
overflowed.
2. Extract uncoveredIntervals from the X-Druid-Response-Context inside the JsonNode passed into
DruidPartialDataResponseProcessor::processResponse.
3. Parse both the uncoveredIntervals extracted above and the allAvailableIntervals extracted from the union of
all the query's datasources' availabilities from DataSourceMetadataService into SimplifiedIntervalLists.
4. Compare the two SimplifiedIntervalLists: if allAvailableIntervals has any overlap with
uncoveredIntervals, invoke the error response indicating Druid is missing data that we expect
to exist.
|
void |
SplitQueryResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
LoggingContext metadata) |
void |
EtagCacheResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
LoggingContext metadata) |
void |
WeightCheckResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
LoggingContext metadata) |
void |
CachingResponseProcessor.processResponse(com.fasterxml.jackson.databind.JsonNode json,
DruidAggregationQuery<?> druidQuery,
LoggingContext metadata) |
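The overlap test at the heart of `DruidPartialDataResponseProcessor` can be sketched directly: if any interval Druid reported as uncovered overlaps an interval we believe is available, data we expected is missing. The `Interval` record below uses half-open `[start, end)` millisecond pairs as a stand-in for Fili's `SimplifiedIntervalList`.

```java
import java.util.List;

// Sketch of the uncovered-vs-available overlap check. Interval is a
// stand-in for the real interval types; [start, end) semantics.
class OverlapSketch {
    record Interval(long start, long end) {
        boolean overlaps(Interval other) {
            return start < other.end && other.start < end;
        }
    }

    static boolean anyOverlap(List<Interval> uncovered, List<Interval> available) {
        // Any uncovered interval intersecting an available one means
        // Druid is missing data we expected to have.
        return uncovered.stream()
                .anyMatch(u -> available.stream().anyMatch(u::overlaps));
    }
}
```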
Constructor and Description |
---|
SplitQueryResponseProcessor(ResponseProcessor next,
DataApiRequest request,
DruidAggregationQuery<?> druidQuery,
Map<org.joda.time.Interval,AtomicInteger> expectedIntervals,
RequestLog logCtx)
Constructor.
|
Modifier and Type | Method and Description |
---|---|
WeightEvaluationQuery |
QueryWeightUtil.makeWeightEvaluationQuery(DruidAggregationQuery<?> druidQuery)
Get the weight check query for the given query.
|
boolean |
QueryWeightUtil.skipWeightCheckQuery(DruidAggregationQuery<?> query)
Indicate if the weight check query can be skipped based on heuristics.
|
Copyright © 2016–2018 Yahoo! Inc. All rights reserved.