19.2. Kafka Data Store Parameters
The Kafka data store differs from most data stores in that the data set is kept entirely in memory. Because of this, the in-memory indexing can be configured at runtime through data store parameters. See Kafka Index Configuration for more information on the available indexing options.
Because configuration options can reference attributes from a particular SimpleFeatureType, it may be necessary to create multiple Kafka data store instances when dealing with multiple schemas.
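For example, a minimal sketch (using parameter names from the table below; the two schemas and the option values are purely illustrative) might create a separately tuned store for each schema:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.geotools.api.data.DataStore;
import org.geotools.api.data.DataStoreFinder;

// one store per schema, since indexing options may reference schema-specific attributes
Map<String, Serializable> pointsParams = new HashMap<>();
pointsParams.put("kafka.brokers", "localhost:9092");
pointsParams.put("kafka.index.resolution.x", 720); // finer spatial bins for this schema
pointsParams.put("kafka.index.resolution.y", 360);
DataStore pointsStore = DataStoreFinder.getDataStore(pointsParams);

Map<String, Serializable> tracksParams = new HashMap<>();
tracksParams.put("kafka.brokers", "localhost:9092");
tracksParams.put("kafka.cache.expiry", "10 minutes"); // expire stale features for this schema
DataStore tracksStore = DataStoreFinder.getDataStore(tracksParams);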
Use the following parameters for a Kafka data store (required parameters are marked with *):
| Parameter | Type | Description |
|---|---|---|
| `kafka.brokers` * | String | Kafka bootstrap servers, e.g. `localhost:9092` |
| `kafka.catalog.topic` | String | The Kafka topic used to store schema metadata, defaults to `geomesa-catalog` |
| `kafka.producer.config` | String | Configuration options for the Kafka producer, in Java properties format. See Producer Configs |
| `kafka.producer.clear` | Boolean | Send a 'clear' message on startup, which causes clients to drop any data that was in the topic prior to startup |
| `kafka.consumer.config` | String | Configuration options for the Kafka consumer, in Java properties format. See Consumer Configs |
| `kafka.consumer.read-back` | String | On start up, read messages that were written within this time frame (as opposed to ignoring older messages), e.g. `1 hour`; use `Inf` to read all messages |
| `kafka.consumer.count` | Integer | Number of Kafka consumers used per feature type. Set to 0 to disable consuming (i.e. producer mode) |
| `kafka.consumer.offset-commit-interval` | String | How often to commit offsets for the consumer group, by default `10 seconds` |
| `kafka.consumer.group-prefix` | String | Prefix to use for the Kafka group ID, to more easily identify particular data stores |
| `kafka.consumer.start-on-demand` | Boolean | Start consuming from a topic only when the feature type is first accessed, defaults to `true` |
| `kafka.topic.partitions` | Integer | Number of partitions to use when creating new Kafka topics |
| `kafka.topic.replication` | Integer | Replication factor to use when creating new Kafka topics |
|  | Boolean | Instead of deleting the Kafka topic when a schema is deleted, mark all messages on the topic as deleted but preserve the topic |
| `kafka.serialization.type` | String | Internal serialization format to use for Kafka messages. Must be one of `kryo`, `avro`, or `avro-native` |
| `kafka.cache.expiry` | String | Expire features from the in-memory cache after this delay, e.g. `10 minutes` |
| `kafka.cache.expiry.dynamic` | String | Expire features dynamically based on CQL predicates. See Feature Expiration |
| `kafka.cache.event-time` | String | Instead of message time, determine expiry based on feature data. See Feature Event Time |
| `kafka.cache.event-time.ordering` | Boolean | Instead of message time, determine feature ordering based on the feature event time. See Feature Event Time |
| `kafka.index.cqengine` | String | Use CQEngine-based attribute indices for the in-memory feature cache. See CQEngine Indexing |
| `kafka.index.resolution.x` | Integer | Number of bins in the x-dimension of the spatial index, by default 360. See Spatial Index Resolution |
| `kafka.index.resolution.y` | Integer | Number of bins in the y-dimension of the spatial index, by default 180. See Spatial Index Resolution |
| `kafka.index.tiers` | String | Number and size of tiers used for indexing geometries with extents, in the form `x1:y1,x2:y2,...,xn:yn` |
| `kafka.serialization.lazy` | Boolean | Use lazy deserialization of features, which may reduce processing load at the expense of slightly slower query times |
| `kafka.layer.views` | String | Additional views on existing schemas to expose as layers. See Layer Views for details |
|  | String | Specify the type of registry used to publish metrics; must be one of the supported registry types |
|  | String | Override the default registry config. See Data Store Metrics for configuration details |
| `geomesa.query.loose-bounding-box` | Boolean | Use loose bounding boxes, which offer improved performance but are not exact |
| `geomesa.query.audit` | Boolean | Audit incoming queries. By default, audits are written to a log file |
| `geomesa.security.auths` | String | Comma-delimited superset of authorizations that will be used for queries. See Reading Visibility Labels for details |
|  | String | Class name for an `AuthorizationsProvider` implementation |
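To illustrate the Java properties format expected by the producer and consumer config parameters above, options can be embedded in the parameter map as newline-separated key=value pairs. The client settings shown here (`compression.type`, `retries`, `fetch.max.wait.ms`) are standard Kafka configs chosen purely for illustration:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

Map<String, Serializable> parameters = new HashMap<>();
parameters.put("kafka.brokers", "localhost:9092");
// parsed as java.util.Properties: one key=value pair per line
parameters.put("kafka.producer.config", "compression.type=snappy\nretries=3");
parameters.put("kafka.consumer.config", "fetch.max.wait.ms=500");
parameters.put("kafka.consumer.count", 2); // two consumers per feature type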
19.2.1. Zookeeper (deprecated)
Historically, the Kafka data store persisted schema information in Zookeeper. However, Kafka deprecated Zookeeper support in 3.x and removed it in 4.x, so GeoMesa now defaults to storing schema information in Kafka itself.
For existing schemas that are persisted in Zookeeper, the following deprecated parameters can be used:
| Parameter | Type | Description |
|---|---|---|
| `kafka.zookeepers` | String | Comma-delimited list of Zookeeper URLs, e.g. `zoo1:2181,zoo2:2181,zoo3:2181` |
| `kafka.zk.path` | String | Zookeeper discoverable path, used to namespace feature types |
See Migration from Zookeeper for details on migrating away from Zookeeper.
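For reference, a minimal sketch of connecting to a legacy Zookeeper-backed store with the deprecated parameters might look as follows (the Zookeeper path shown is an illustrative value, not a required default):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

Map<String, Serializable> parameters = new HashMap<>();
parameters.put("kafka.brokers", "localhost:9092");
parameters.put("kafka.zookeepers", "localhost:2181"); // deprecated
parameters.put("kafka.zk.path", "geomesa/ds/kafka");  // deprecated; example namespace path
org.geotools.api.data.DataStore dataStore =
    org.geotools.api.data.DataStoreFinder.getDataStore(parameters);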
19.2.2. Programmatic Access
An instance of a Kafka data store can be obtained through the normal GeoTools discovery methods, assuming that the GeoMesa code is on the classpath.
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

Map<String, Serializable> parameters = new HashMap<>();
parameters.put("kafka.brokers", "localhost:9092");
org.geotools.api.data.DataStore dataStore =
    org.geotools.api.data.DataStoreFinder.getDataStore(parameters);
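Once obtained, the data store can be used through the standard GeoTools interfaces, for example to list the available feature types and read from the in-memory cache (the feature type name below is hypothetical and assumes a producer has already created the schema):

// list the feature types available in the store
for (String typeName : dataStore.getTypeNames()) {
    System.out.println(typeName);
}

// read the current state of the in-memory cache for one type
org.geotools.api.data.SimpleFeatureSource source =
    dataStore.getFeatureSource("my-feature-type");
System.out.println("features: " + source.getFeatures().size());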
More information on using GeoTools can be found in the GeoTools user guide.