How to run Spark on Kubernetes in JupyterHub
akoshel
Posted on October 20, 2022
This is a basic tutorial on how to run Spark in client mode from a JupyterHub notebook.
All required files are available here: https://github.com/akoshel/spark-k8s-jupyterhub
Motivation
I found a lot of tutorials on this topic, and almost all of them rely on heavily customized Spark and JupyterHub deployments. So I decided to minimize custom configuration and use the stock open-source solutions as much as possible.
Install minikube & helm
First, we need to create the k8s infrastructure.
Minikube installation instructions: https://minikube.sigs.k8s.io/docs/start/
Helm installation instructions: https://helm.sh/docs/intro/install/
Make local docker images available from minikube:
eval $(minikube docker-env)
Install spark
Let's install Spark locally.
Next, we will build a Spark image and run the SparkPi example with spark-submit.
sudo apt-get -y install openjdk-8-jdk-headless
wget https://downloads.apache.org/spark/spark-3.2.2/spark-3.2.2-bin-hadoop3.2.tgz
tar xvf spark-3.2.2-bin-hadoop3.2.tgz
sudo mv spark-3.2.2-bin-hadoop3.2 /opt/spark
Build spark image
Spark ships with a Kubernetes Dockerfile. Let's build the Spark image:
cat /opt/spark/kubernetes/dockerfiles/spark/Dockerfile
cd /opt/spark
docker build -t spark:latest -f kubernetes/dockerfiles/spark/Dockerfile .
The Spark base image does not include Python, so we should build a PySpark image (/opt/spark/kubernetes/dockerfiles/spark/bindings/python/Dockerfile).
The base image also does not support s3a and Postgres, which is why the corresponding Maven jars should be added.
See the modified image here: https://github.com/akoshel/spark-k8s-jupyterhub/blob/main/pyspark.Dockerfile
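Once those jars are baked into the image, s3a and Postgres can be used from PySpark through the standard APIs. Here is a minimal sketch; the endpoint, credentials, bucket, and table names below are placeholders, and it assumes the hadoop-aws and PostgreSQL JDBC jars from the modified Dockerfile are on the classpath.
from pyspark.sql import SparkSession

# Placeholder endpoint and credentials -- replace with your own
spark = (SparkSession.builder
         .appName("s3a-postgres-check")
         .config("spark.hadoop.fs.s3a.endpoint", "http://minio.example:9000")
         .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
         .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
         .config("spark.hadoop.fs.s3a.path.style.access", "true")
         .getOrCreate())

# Read a CSV from an S3-compatible store via the s3a connector
df = spark.read.csv("s3a://my-bucket/data.csv", header=True)

# Write the same data to Postgres via the JDBC driver
(df.write.format("jdbc")
   .option("url", "jdbc:postgresql://postgres.example:5432/mydb")
   .option("dbtable", "public.my_table")
   .option("user", "postgres")
   .option("password", "postgres")
   .mode("append")
   .save())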
Build pyspark image
cd /opt/spark
docker build -t pyspark:latest --build-arg base_img=spark:latest -f kubernetes/dockerfiles/spark/bindings/python/Dockerfile .
Run spark-pi
Before running the example, the namespace, service account, role, and role binding should be deployed:
kubectl apply -f spark_namespace.yaml
kubectl apply -f spark_sa.yaml
kubectl apply -f spark_sa_role.yaml
Now we are ready to run the SparkPi example using spark-submit.
(Use kubectl cluster-info to find your master address)
/opt/spark/bin/spark-submit \
--master k8s://https://192.168.49.2:8443 \
--deploy-mode cluster \
--driver-memory 1g \
--conf spark.kubernetes.memoryOverheadFactor=0.5 \
--name sparkpi-test1 \
--class org.apache.spark.examples.SparkPi \
--conf spark.kubernetes.container.image=spark:latest \
--conf spark.kubernetes.driver.pod.name=spark-test1-pi \
--conf spark.kubernetes.namespace=spark \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--verbose \
local:///opt/spark/examples/jars/spark-examples_2.12-3.2.2.jar 1000
Check logs
kubectl logs -n spark spark-test1-pi | grep "Pi is roughly"
Pi is roughly 3.1416600314166003
Great! Spark is running on k8s.
Install jupyterhub
Before installing JupyterHub, a service account, role, and role binding should be deployed in the jupyterhub namespace:
kubectl apply -f jupyterhub_sa.yaml
kubectl apply -f jupyterhub_sa_role.yaml
In client mode the driver runs inside the notebook pod in the jupyterhub namespace, while the executors are deployed in the spark namespace and need a stable address to connect back to the driver.
That is why we have to deploy a driver service (driver_service.yaml):
kubectl apply -f driver_service.yaml
To get access to the Spark UI, an ingress should be deployed:
kubectl apply -f driver_ingress.yaml
Java is not installed in the default JupyterHub singleuser image, so build a modified singleuser image:
docker build -f singleuser.Dockerfile -t singleuser:v1 .
See jhub_values.yaml. It contains the following modifications: the new singleuser image, the service account, and resource settings.
Now we are ready to deploy JupyterHub. Add the JupyterHub Helm chart repository first:
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update
helm upgrade --cleanup-on-fail \
--install jupyterhub jupyterhub/jupyterhub \
--namespace jupyterhub \
--create-namespace \
--version=2.0.0 \
--values jhub_values.yaml
The easiest way to access JupyterHub is to port-forward from the proxy pod. Alternatively, you can configure an ingress in jhub_values.yaml.
kubectl port-forward proxy-dd5964d5b-6lkwp -n jupyterhub 8000:8000 # Set your pod name
Pyspark from jupyterhub
Open JupyterHub in your browser: http://localhost:8000/
Open a terminal in JupyterHub and install the PySpark version that matches the Spark version in the image:
pip install pyspark==3.2.2
Create notebook
Create SparkContext
from pyspark import SparkConf, SparkContext
conf = (SparkConf().setMaster("k8s://https://192.168.49.2:8443") # Your master address (from kubectl cluster-info)
.set("spark.kubernetes.container.image", "pyspark:latest") # Spark image name
.set("spark.driver.port", "2222") # Needs to match svc
.set("spark.driver.blockManager.port", "7777")
.set("spark.driver.host", "driver-service.jupyterhub.svc.cluster.local") # Needs to match svc
.set("spark.driver.bindAddress", "0.0.0.0")
.set("spark.kubernetes.namespace", "spark")
.set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
.set("spark.kubernetes.authenticate.serviceAccountName", "spark")
.set("spark.executor.instances", "2")
.set("spark.kubernetes.container.image.pullPolicy", "IfNotPresent")
.set("spark.app.name", "tutorial_app"))
Run spark application
# Calculate the approximate sum of values in the dataset
t = sc.parallelize(range(10))
r = t.sumApprox(3)
print('Approximate sum: %s' % r)
Approximate sum: 45.0
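The same context can also back the DataFrame API by wrapping it in a SparkSession. A small sketch for illustration (the column names and values here are made up):
from pyspark.sql import SparkSession

spark = SparkSession(sc)  # reuse the existing SparkContext
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "label"])
df.groupBy("label").count().show()

# Stop the context when you are finished so the executor pods are released
# sc.stop()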
See executor pods
kubectl get pods -n spark
NAME READY STATUS RESTARTS AGE
tutorial-app-d63d4c83e68ed465-exec-1 1/1 Running 0 16s
tutorial-app-d63d4c83e68ed465-exec-2 1/1 Running 0 15s
Congratulations! PySpark in client mode is running from JupyterHub.
Further steps:
- Tune your Spark configuration
- Configure jupyterhub https://z2jh.jupyter.org/en/stable/jupyterhub/customization.html
- Install spark operator https://googlecloudplatform.github.io/spark-on-k8s-operator/docs/quick-start-guide.html
Resources:
- https://spark.apache.org/docs/latest/running-on-kubernetes.html
- https://z2jh.jupyter.org/
- https://scalingpythonml.com/2020/12/21/running-a-spark-jupyter-notebooks-in-client-mode-inside-of-a-kubernetes-cluster-on-arm.html
- https://oak-tree.tech/blog/spark-kubernetes-jupyter
P.S. Check out my second post about Spark on k8s:
Optimize Spark on Kubernetes
https://dev.to/akoshel/optimize-spark-on-kubernetes-32la