Real-time data fetching and visualizing with Python & Grafana

Aliyev Nurlan

2022.12.15

The task is to fetch real-time readings from a weather station located in Budapest and visualize the data using Grafana. There are various ways to tackle this problem, but one of the most efficient is to automate the task with Python. Fortunately, we can access an API endpoint that returns the current readings as JSON, and handle the data from there.

Each reading carries a unique timestamp, so we’ll use InfluxDB, a time-series database, as the main database system. Python is the preferred programming language here to automate fetching the data and uploading it to InfluxDB. As the final stage, we’ll use Grafana to visualize the data however we want to see it.

Below is the general workflow:

  • Fetch data from the API endpoint and store the response,

  • Parse the returned JSON into a Flux-annotated CSV file,

  • Write the saved CSV file to InfluxDB,

  • Automate running the Python script.

Prerequisites:

  • Install influx-cli (available from the InfluxData downloads page),
  • Configure it as in Figure 1 (the redacted areas contain private tokens and the server address)
Figure 1

  • Have Python installed, and set up InfluxDB and Grafana on the server
  1. Data Fetching

    We will be using the “urlopen” and “json” modules.

As the first stage, we open the URL of the API endpoint, read the returned response, and assign it to a variable. Then we parse the data as JSON using the json.loads() method (Figure 2).

Figure 2
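A minimal sketch of this stage, assuming a placeholder endpoint URL (the real address is specific to the weather station):

    import json
    from urllib.request import urlopen

    API_URL = "https://example.com/weather/current"  # placeholder endpoint

    def fetch_readings():
        # Open the API endpoint and read the raw response body
        with urlopen(API_URL) as response:
            raw = response.read()
        # Parse the response body as JSON
        return json.loads(raw)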

Next, we unpack the JSON response, store the individual readings in variables, and collect them into a list (Figure 3).

Figure 3
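The exact field names depend on the station’s JSON schema; assuming keys such as “temperature”, “humidity” and “pressure”, the unpacking step could look like this:

    def extract_readings(data):
        # Pick the individual readings out of the parsed JSON;
        # the key names are assumptions, adjust them to the real schema
        temperature = data["temperature"]
        humidity = data["humidity"]
        pressure = data["pressure"]
        return [temperature, humidity, pressure]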

Since the readings can contain “None” values due to faulty sensors, we add an extra safety step that turns such values into 0 (this part is optional, Figure 4).

Figure 4
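One way to implement this optional safety step:

    def clean_readings(readings):
        # Replace None values (faulty sensors) with 0 so the CSV stays numeric
        return [0 if value is None else value for value in readings]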

  2. Data Parsing

This phase requires some boilerplate code, since InfluxDB expects Flux-annotated CSV.

Here we configure the headers with the required annotations (Figure 5). In the following step, we unpack the list into variables and write the readings as rows into the main list.

Figure 5
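For reference, a Flux-annotated CSV puts a “#datatype” row above the ordinary header row so InfluxDB knows how to interpret each column. Prepared as lists for Python’s csv writer (the column and tag names are illustrative assumptions):

    # The "#datatype" annotation declares each column's type:
    # measurement name, one tag, a float value, and an RFC3339 timestamp
    DATATYPE_ROW = ["#datatype measurement", "tag", "double", "dateTime:RFC3339"]
    # Ordinary CSV header naming the columns (names are assumptions)
    HEADER_ROW = ["measurement", "sensor", "value", "time"]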

This operation needs to be done for all of the readings (Figure 6). After that, we create the CSV file, write the headers and the data rows, and save the file to the working directory (Figure 7).

Figure 6

Figure 7
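Putting those pieces together, a sketch of the CSV-writing stage; the measurement name, file path and reading names are placeholders:

    import csv
    from datetime import datetime, timezone

    CSV_PATH = "C:/weather/readings.csv"  # placeholder path

    # Annotation and header rows as in the previous sketch
    DATATYPE_ROW = ["#datatype measurement", "tag", "double", "dateTime:RFC3339"]
    HEADER_ROW = ["measurement", "sensor", "value", "time"]

    def save_csv(readings):
        # Timestamp every reading with the current UTC time in RFC3339 format
        now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        names = ["temperature", "humidity", "pressure"]  # assumed reading names
        # One row per reading: measurement, sensor tag, value, timestamp
        rows = [["weather", name, value, now] for name, value in zip(names, readings)]
        with open(CSV_PATH, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(DATATYPE_ROW)
            writer.writerow(HEADER_ROW)
            writer.writerows(rows)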

  3. Writing the CSV file to InfluxDB

Finally, we have an annotated CSV file to write to our database. There are a couple of things to pay attention to:

  • The path to the installed and configured “influx-cli” binary is required,
  • The path of the CSV file we created is required, and only needs to be set once,
  • It is preferable to use “/” (forward slash) instead of “\” (backslash) as the directory separator in the paths,
  • The “os” module needs to be imported,
  • The bucket name must be set,
  • This solution targets Windows only.

The function that handles data writing is fairly simple: we build a CMD command inside the Python script and let the shell run it (Figure 8).

Figure 8
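A sketch of that function, with placeholder paths and bucket name; “influx write --bucket ... --file ...” is the CLI’s documented syntax for writing an annotated CSV:

    import os

    INFLUX_CLI = "C:/influx/influx.exe"   # placeholder path to the configured CLI
    CSV_PATH = "C:/weather/readings.csv"  # placeholder path to the generated CSV
    BUCKET = "weather"                    # placeholder bucket name

    def write_to_influxdb():
        # Build the CMD command and hand it to the shell;
        # forward slashes keep the Windows paths unambiguous
        command = f'"{INFLUX_CLI}" write --bucket {BUCKET} --file "{CSV_PATH}"'
        os.system(command)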

At this point, the main logic of the application is finished. If we run the script with the rest of the Python code added, we get the result shown in Figure 9.

Figure 9
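Wired together, the whole script reduces to calling the stages in order (the function names follow the sketches above):

    def main():
        data = fetch_readings()              # 1. fetch JSON from the API endpoint
        readings = extract_readings(data)    # 2. pick out the values
        readings = clean_readings(readings)  #    optional None -> 0 safety step
        save_csv(readings)                   # 3. produce the annotated CSV
        write_to_influxdb()                  # 4. push it to InfluxDB via influx-cli

    if __name__ == "__main__":
        main()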

  4. Automating

To run this script automatically, we chose the most accessible option: Windows Task Scheduler. One caveat is that the machine running the script must stay on at all times to function as a server.

Creating a task in Task Scheduler is fairly simple: first, right-click “Task Scheduler (Local)” and select “Create Task…”. In the window that opens, there are a couple of fields to fill in.

4.1. Giving our task a name

We can use any name we want, but for clarity we chose “fetch_upload_data_to_influxdb” (Figure 10).

Figure 10

4.2. Next, we add a trigger in the “Triggers” panel. We set it to run daily, repeating the task every 10 minutes for an indefinite duration (Figure 11).

Figure 11

4.3. The final step is to add an action in the “Actions” panel that will run our script. The path to the script must be provided by the user (Figure 12).

Figure 12

  5. Grafana

After these operations, we are ready to query the data and visualize it on customizable dashboards (Figure 13).

Figure 13