11.8. Deploying GeoMesa Spark with Jupyter Notebook

Jupyter Notebook is a web-based application for creating interactive documents containing runnable code, visualizations, and text. Via the Apache Toree kernel, Jupyter can be used to prepare spatio-temporal analyses in Scala and submit them to Spark. The guide below describes how to configure Jupyter with Spark 2.4.x, 3.0.x, or 3.1.x, Scala 2.11, and GeoMesa.


GeoMesa support for PySpark provides access to GeoMesa Accumulo data stores through the Spark Python API using Jupyter's built-in Python kernel. See GeoMesa PySpark.

11.8.1. Prerequisites

Spark 2.4.x, 3.0.x, or 3.1.x should be installed, and the environment variable SPARK_HOME should be set. Spark 2.0 and later requires Scala version 2.11.

Python 2.7 or 3.x should be installed. It is recommended to install Jupyter and Toree inside a Python virtualenv or a conda environment.
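For example, a virtualenv-based setup might look like the following sketch (the environment name geomesa-jupyter is arbitrary):

```shell
# create an isolated Python environment (name is arbitrary)
python3 -m venv geomesa-jupyter
# activate it; subsequent pip installs go into the environment
. geomesa-jupyter/bin/activate
```

With the environment activated, the pip commands in the following sections install into it rather than into the system Python.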

11.8.2. Installing Jupyter

Jupyter may be installed via pip (for Python 2.7) or pip3 (for Python 3.x):

$ pip install --upgrade jupyter


$ pip3 install --upgrade jupyter

11.8.3. Installing the Toree Kernel

Likewise, the Toree kernel may be installed via pip (for Python 2.7) or pip3 (for Python 3.x):

$ pip install --upgrade toree


$ pip3 install --upgrade toree

11.8.4. Configure Toree and GeoMesa

If you have the GeoMesa Accumulo distribution installed at GEOMESA_ACCUMULO_HOME as described in Setting up the Accumulo Command Line Tools, you can run an example script like the following to configure Toree with GeoMesa version VERSION (the JAR paths shown are typical, but may vary by version):


#!/bin/sh
# the GeoMesa version to use (adjust for your installation)
VERSION=2.4.0

# bundled GeoMesa Accumulo Spark and Spark SQL runtime JAR
# (contains geomesa-accumulo-spark, geomesa-spark-core, geomesa-spark-sql, and dependencies)
jars="file://$GEOMESA_ACCUMULO_HOME/dist/spark/geomesa-accumulo-spark-runtime_2.11-$VERSION.jar"

# uncomment to use the converter RDD provider
#jars="$jars,file://$GEOMESA_ACCUMULO_HOME/lib/geomesa-convert-all_2.11-$VERSION.jar"

# uncomment to work with shapefiles (requires $GEOMESA_ACCUMULO_HOME/bin/install-shapefile-dependencies.sh)
#jars="$jars,$(echo $GEOMESA_ACCUMULO_HOME/lib/gt-shapefile*.jar | tr ' ' ',')"
jupyter toree install \
    --replace \
    --user \
    --kernel_name "GeoMesa Spark $VERSION" \
    --spark_home=${SPARK_HOME} \
    --spark_opts="--master yarn --jars $jars"


If you built GeoMesa from source, the specified JARs can be found in the target directory of each respective module of the source distribution.


You may wish to change --spark_opts to specify the number and configuration of your executors; otherwise the values in $SPARK_HOME/conf/spark-defaults.conf or $SPARK_OPTS will be used.
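As a sketch, executor-related settings could be passed via --spark_opts or the $SPARK_OPTS environment variable; the values below are illustrative and should be tuned for your cluster:

```shell
# illustrative executor configuration; tune the counts and memory for your workload
SPARK_OPTS="--master yarn --num-executors 10 --executor-cores 2 --executor-memory 4g"
echo "$SPARK_OPTS"
```

These options are passed through to spark-submit when the kernel starts, so anything valid there may be used.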

You may also consider adding geomesa-tools-2.11-$VERSION-data.jar to include prepackaged converters for publicly available data sources (as described in Prepackaged Converter Definitions), geomesa-jupyter-leaflet-2.11-$VERSION.jar to include an interface for the Leaflet spatial visualization library (see Leaflet for Visualization, below), and/or geomesa-jupyter-vegas-2.11-$VERSION.jar to use the Vegas data plotting library (see Vegas for Plotting, below).
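Extending the $jars list with these optional modules might look like the following sketch (the lib paths and default values are illustrative; adjust for your installation):

```shell
# example values; normally set earlier in your configuration script
GEOMESA_ACCUMULO_HOME=${GEOMESA_ACCUMULO_HOME:-/opt/geomesa-accumulo}
VERSION=${VERSION:-2.4.0}

# bundled Spark runtime JAR (as in the configuration script above)
jars="file://$GEOMESA_ACCUMULO_HOME/dist/spark/geomesa-accumulo-spark-runtime_2.11-$VERSION.jar"

# optional additions: prepackaged converters, Leaflet visualization, Vegas plotting
jars="$jars,file://$GEOMESA_ACCUMULO_HOME/lib/geomesa-tools_2.11-$VERSION-data.jar"
jars="$jars,file://$GEOMESA_ACCUMULO_HOME/lib/geomesa-jupyter-leaflet_2.11-$VERSION.jar"
jars="$jars,file://$GEOMESA_ACCUMULO_HOME/lib/geomesa-jupyter-vegas_2.11-$VERSION.jar"
echo "$jars"
```

The resulting comma-separated list is then passed to --spark_opts="--jars $jars" as before.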

11.8.5. Running Jupyter

For public notebooks, you should configure Jupyter to use a password and bind to a public IP address (by default, Jupyter will only accept connections from localhost). To run Jupyter with the GeoMesa Spark kernel:

$ jupyter notebook


Long-lived processes should probably be hosted in screen, systemd, or supervisord.
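As a sketch, a minimal systemd unit along the following lines could supervise the server; the user, paths, and environment values here are placeholders, not part of any GeoMesa distribution:

```ini
# /etc/systemd/system/jupyter.service -- illustrative only
[Unit]
Description=Jupyter Notebook server (GeoMesa Spark kernel)
After=network.target

[Service]
Type=simple
User=jupyter
Environment=SPARK_HOME=/opt/spark
ExecStart=/opt/geomesa-jupyter/bin/jupyter notebook --no-browser
Restart=on-failure

[Install]
WantedBy=multi-user.target
```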

Your notebook server should launch and be accessible at http://localhost:8888/ (or the address and port you bound the server to), potentially requiring an access token which will be shown in the server output.
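To bind to a public address with a password, one approach (a sketch; adjust to your deployment) is to run jupyter notebook --generate-config and jupyter notebook password once, then set the relevant options in the generated configuration file:

```python
# ~/.jupyter/jupyter_notebook_config.py -- illustrative settings
c.NotebookApp.ip = '0.0.0.0'      # bind to all interfaces (or a specific public IP)
c.NotebookApp.port = 8888
c.NotebookApp.open_browser = False
```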


All Spark code will be submitted as the user account running the Jupyter server. You may wish to look at JupyterHub for a multi-user Jupyter server.

11.8.6. Leaflet for Visualization

The following sample notebook shows how you can use Leaflet for data visualization:

classpath.add("org.locationtech.jts" % "jts" % "1.13")
classpath.add("org.locationtech.geomesa" % "geomesa-accumulo-datastore" % "1.3.0")
classpath.add("org.apache.accumulo" % "accumulo-core" % "1.6.4")
classpath.add("org.locationtech.geomesa" % "geomesa-jupyter" % "1.3.0")

import org.locationtech.geomesa.jupyter.Jupyter._

implicit val displayer: String => Unit = display.html(_)

import scala.collection.JavaConversions._
import org.locationtech.geomesa.accumulo.data.AccumuloDataStoreParams._
import org.locationtech.geomesa.utils.geotools.Conversions._

val params = Map(
        ZookeepersParam.key -> "ZOOKEEPERS",
        InstanceIdParam.key -> "INSTANCE",
        UserParam.key       -> "USER_NAME",
        PasswordParam.key   -> "USER_PASS",
        CatalogParam.key    -> "CATALOG")

val ds = org.geotools.data.DataStoreFinder.getDataStore(params)
val ff = org.geotools.factory.CommonFactoryFinder.getFilterFactory2
val fs = ds.getFeatureSource("twitter")

val filt = ff.and(
    ff.between(ff.property("dtg"), ff.literal("2016-01-01"), ff.literal("2016-05-01")),
    ff.bbox("geom", -80, 37, -75, 40, "EPSG:4326"))
val features = fs.getFeatures(filt).features.take(10).toList

displayer(L.render(Seq(SimpleFeatureLayer(features),
                       Circle(-78.0,38.0,1000,  StyleOptions(color="yellow",fillColor="#63A",fillOpacity=0.5)),
                       Circle(-78.0,45.0,100000,StyleOptions(color="#0A5" ,fillColor="#63A",fillOpacity=0.5)))))
[Figure: Adding Layers to a Map and Displaying in the Notebook]

The following snippet is an example of rendering dataframes in Leaflet in a Jupyter notebook:

implicit val displayer: String => Unit = { s => kernel.display.content("text/html", s) }

val function = """
function(feature) {
  switch (feature.properties.plane_type) {
    case "A388": return {color: "#1c2957"}
    default: return {color: "#cdb87d"}
  }
}
"""
val sftLayer = time { L.DataFrameLayerNonPoint(flights_over_state, "__fid__", L.StyleOptionFunction(function)) }
val apLayer = time { L.DataFrameLayerPoint(flyovers, "origin", L.StyleOptions(color="#1c2957", fillColor="#cdb87d"), 2.5) }
val stLayer = time { L.DataFrameLayerNonPoint(queryOnStates, "ST", L.StyleOptions(color="#1c2957", fillColor="#cdb87d", fillOpacity= 0.45)) }
displayer(L.render(Seq[L.GeoRenderable](sftLayer,stLayer,apLayer),zoom = 1, path = "path/to/files"))
[Figure: rendering dataframe layers in Leaflet]

StyleOptionFunction

This case class allows you to specify a Javascript function to perform styling. The anonymous function that you will pass takes a feature as an argument and returns a Javascript style object. An example of styling based on a specific property value is provided below:

function(feature) {
  switch(feature.properties.someProp) {
    case "someValue": return { color: "#ff0000" }
    default         : return { color: "#0000ff" }
  }
}

The following table provides options that might be of interest:

Option        Description
------        -----------
color         Stroke color
weight        Stroke width in pixels
opacity       Stroke opacity
fillColor     Fill color
fillOpacity   Fill opacity

Note: options are comma-separated (e.g. { color: "#ff0000", fillColor: "#0000ff" })

11.8.7. Vegas for Plotting

The Vegas library may be used with GeoMesa, Spark, and Toree in Jupyter to plot quantitative data. The geomesa-jupyter-vegas module builds a shaded JAR containing all of the dependencies needed to run Vegas in Jupyter+Toree. This module must be built from source, using the vegas profile:

$ mvn clean install -Pvegas -pl geomesa-jupyter/geomesa-jupyter-vegas

This will build geomesa-jupyter-vegas_2.11-$VERSION.jar in the target directory of the module; add it to the list of JARs in the jupyter toree install command described in Configure Toree and GeoMesa:

# path relative to the source distribution root; adjust as needed
jars="$jars,geomesa-jupyter/geomesa-jupyter-vegas/target/geomesa-jupyter-vegas_2.11-$VERSION.jar"
# then continue with "jupyter toree install" as before

To use Vegas within Jupyter, load the appropriate libraries and a displayer:

import vegas._
import vegas.render.HTMLRenderer._
import vegas.sparkExt._

implicit val displayer: String => Unit = { s => kernel.display.content("text/html", s) }

Then use the withDataFrame method to plot data in a DataFrame:

val df: DataFrame = ... // a DataFrame with columns "a" and "b"

Vegas("Simple bar chart").
  withDataFrame(df).
  encodeX("a", Ordinal).
  encodeY("b", Quantitative).
  mark(Bar).
  show