Integration of SWAN with Spark clusters
The current setup allows executing PySpark operations on the CERN Hadoop and Spark clusters.
This notebook illustrates the use of Spark in SWAN to analyze the monitoring data available on HDFS (analytix), plotting a heatmap of loadAvg across the machines of a particular service.
Connect to the cluster (analytix)¶
To connect to a cluster, click on the star button at the top and follow the instructions
- The star button only appears if you have selected a Spark cluster in the configuration
- The star button becomes active once the notebook kernel is ready
Import necessary Spark and Python dependencies¶
In [1]:
from pyspark.sql.functions import col, from_json, from_unixtime, when
from pyspark.sql.types import *
In [2]:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
Select the data¶
This reads monitoring data stored in Hadoop
In [3]:
# Create the dataframe from the parquet files containing monitoring data
df = spark.read.parquet("hdfs://analytix/project/monitoring/collectd/load/2022/10/14/")
Check the data structure¶
In [4]:
df.printSchema()
Create a temporary table view¶
In [5]:
df.createOrReplaceTempView("loadAvg")
Do the heavy lifting in Spark and collect the aggregated view into a pandas DataFrame¶
In [6]:
# Run the aggregation query in Spark.
# Fetch the results into a pandas DataFrame for later plotting
df_loadAvg_pandas = spark.sql("""SELECT host,
avg(value) as avg,
hour(from_unixtime(timestamp / 1000, 'yyyy-MM-dd HH:mm:ss')) as hr
FROM loadAvg
WHERE submitter_hostgroup like 'swan/node/production'
AND dayofmonth(from_unixtime(timestamp / 1000, 'yyyy-MM-dd HH:mm:ss')) = 14
GROUP BY hour(from_unixtime(timestamp / 1000, 'yyyy-MM-dd HH:mm:ss')), host""").toPandas()
Visualize load heatmap¶
In [7]:
# heatmap of loadAvg
plt.figure(figsize=(12, 8))
ax = sns.heatmap(df_loadAvg_pandas.pivot(index='host', columns='hr', values='avg'), cmap="Blues")
ax.set_title("Heatmap of loadAvg for cluster on 2022/10/14", fontsize=20)
Out[7]:
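The `pivot()` call reshapes the long host/hr/avg table into a matrix with one row per host and one column per hour, which is the shape `sns.heatmap` expects. A small illustration on made-up values:

```python
import pandas as pd

# Made-up long-format data with the same columns as df_loadAvg_pandas
long_df = pd.DataFrame({
    "host": ["node1", "node1", "node2", "node2"],
    "hr":   [0, 1, 0, 1],
    "avg":  [1.0, 2.0, 0.5, 0.7],
})

# One row per host, one column per hour -- the matrix fed to sns.heatmap
matrix = long_df.pivot(index="host", columns="hr", values="avg")
print(matrix)
```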
Create a histogram of uptime for the monitored entities¶
In [8]:
# create the dataframe
df = spark.read.parquet("hdfs://analytix/project/monitoring/collectd/uptime/2022/10/14/")
In [9]:
# create temporary view
df.createOrReplaceTempView("uptime")
In [10]:
# Extract the data running a query using Spark.
# Fetch the results into a Pandas DataFrame
df_uptime_pandas = spark.sql("""SELECT host, round(max(value)/60/60/24) as days
FROM uptime
WHERE dayofmonth(from_unixtime(timestamp / 1000, 'yyyy-MM-dd HH:mm:ss')) = 14
AND hour(from_unixtime(timestamp / 1000, 'yyyy-MM-dd HH:mm:ss')) = 12
GROUP BY host""").toPandas()
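The uptime metric is reported in seconds, so `round(max(value)/60/60/24)` converts the largest sample of the day into whole days since the last reboot. The same conversion in plain pandas, on invented sample rows:

```python
import pandas as pd

# Made-up uptime samples in seconds for two hosts (illustrative values only)
uptime = pd.DataFrame({
    "host":  ["node1", "node1", "node2"],
    "value": [864000.0, 864300.0, 172900.0],  # seconds since last reboot
})

# Largest sample per host, converted to whole days
days = (uptime.groupby("host")["value"].max() / 60 / 60 / 24).round()
print(days)
```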
In [11]:
# visualize with seaborn
# histogram of uptime (time since last reboot)
plt.figure(figsize=(12, 8))
ax = sns.histplot(df_uptime_pandas['days'], kde=False, color='red', bins=range(0, 1800, 20))
ax.set_title("Histogram of uptime", fontsize=20)
ax.set_yscale('log')
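`bins=range(0, 1800, 20)` puts each machine into a 20-day-wide bucket (up to roughly five years), and the log y-scale keeps the long tail of rarely rebooted machines visible. The bucketing itself, sketched in plain NumPy on made-up uptimes:

```python
import numpy as np

# Made-up uptimes in days
days = np.array([1, 5, 19, 20, 45, 300])

# Same 20-day-wide buckets as the seaborn histogram above
counts, edges = np.histogram(days, bins=range(0, 1800, 20))
print(counts[:3])  # counts for the [0,20), [20,40), [40,60) buckets
```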
In [ ]:
spark.stop()