TU Delft ENS mmWave
This serves as a guide to deploying mmWave sensors using the TI IWR6843K mmWave board, as developed at the ENS Group at TU Delft.
Project Contributors
- Akshit Gupta
- Erwin Russel
Under the guidance of Marco Zuniga and Fernando Kuipers.
Code Organisation
Indoors
For the indoor use case, we use the Sense and Direct HVAC Control configuration for the IWR6843K mmWave board.
- The 68xx_Sense_and_Direct_HVAC_Control folder contains all the required material, including the algorithm details and the .bin file for flashing the firmware onto the board. The tutorial to flash the firmware is located here. This folder is derived from the mmWave Industrial Toolbox provided by TI.
Once the board is flashed with the desired firmware (Sense and Direct HVAC Control in this case), the data is available as a UART stream. The remaining files are concerned with receiving this data, parsing it, and pushing it to our database and dashboard.
- The sense_and_direct_68xx.cfg file contains the configuration of the mmWave radar recommended by TI for our use case. The details of these configuration parameters are available here on our drive.
- config.ini contains the configuration parameters for InfluxDB, including the server URL where InfluxDB is deployed. Here, token holds the username:password pair used for read/write access to InfluxDB. The tag deviceId is used to uniquely identify the node (the mmWave sensor); it should therefore be changed to a self-explanatory value for every new deployment so that measurements from that specific sensor can be retrieved from InfluxDB.
- The influxPush folder contains the influxClient file, which provides the methods used to send data to InfluxDB.
- oob_parser.py is again derived from the mmWave Industrial Toolbox provided by TI and deals with parsing the data from the UART stream.
- The most important file is client.py, which is the main entry point of the program and uses config.ini, oob_parser.py and the influxPush module. It currently holds two configuration parameters of its own (which should be moved to config.ini, PENDING). The first is the time interval at which data is pushed to InfluxDB, set to one reading every x seconds. The second is the pair of serial ports through which the mmWave sensor is connected. For macOS and the Raspberry Pi these ports are already defined; on Windows and Linux they need to be changed to match the individual laptop or PC. A minimal sketch of how these pieces fit together is shown after this list.
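Below is a minimal sketch of how client.py ties the pieces together. It is illustrative only: the parser class and influxPush helper names (uartParser, readAndParseUart, push_people_count), the config.ini sections and keys, the port names and the push interval are all assumptions and may differ from the actual code in this repository.

```python
# Illustrative sketch only: uartParser, readAndParseUart, push_people_count and the
# config.ini section/key names are assumptions, not the exact names used in this repo.
import configparser
import time

from oob_parser import uartParser                       # assumed parser class from oob_parser.py
from influxPush.influxClient import push_people_count   # assumed helper from influxPush

# Read the InfluxDB settings and the device tag from config.ini
cfg = configparser.ConfigParser()
cfg.read("config.ini")
influx_url = cfg["influxdb"]["url"]        # server URL, e.g. http://<server>:8086
token = cfg["influxdb"]["token"]           # username:password with read/write access
device_id = cfg["influxdb"]["deviceId"]    # unique tag per deployed node

# Parameters still hard-coded in client.py (PENDING move to config.ini)
PUSH_INTERVAL_S = 5                        # one reading every x seconds (placeholder value)
CLI_PORT = "/dev/ttyUSB0"                  # configuration/CLI UART port, adjust per machine
DATA_PORT = "/dev/ttyUSB1"                 # data UART port, adjust per machine

parser = uartParser()
parser.connectComPorts(CLI_PORT, DATA_PORT)  # assumed method name

while True:
    frame = parser.readAndParseUart()      # parse one frame (people count, point cloud, ...)
    push_people_count(influx_url, token, device_id, frame)
    time.sleep(PUSH_INTERVAL_S)
```

On the Raspberry Pi and other Linux machines the two UART ports typically appear as /dev/ttyUSB* or /dev/ttyACM* devices, while on Windows they appear as COM ports.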
Outdoor
Needs updating by Erwin
Software Stack
The software stack is organised as follows.
The system follows a TIG (Telegraf-InfluxDB-Grafana) stack, a popular architecture used in real-time IoT and monitoring applications.
- The code for reading the UART stream from the board is written in Python; the details are discussed in the previous section.
- This data is sent to a central InfluxDB server running on Google Cloud Platform (on port 8086). It is recommended to become familiar with the advantages of using a time series database over a standard SQL database (see the InfluxDB documentation linked in the References). For IoT time series data with a high duty cycle, a time series database makes the storage of measurements efficient.
- On the Pi, Telegraf is also used to push real-time performance metrics to the same InfluxDB server. The duty cycle is set to one reading every 30 s, and the configuration file for Telegraf is located at the default location (/etc/telegraf/telegraf.conf).
- For the dashboard, Grafana is used, deployed on the same server as InfluxDB on Google Cloud Platform (on port 3000). Again, it is recommended to at least become familiar with the advantages and ease of use of building the dashboard in Grafana compared with custom-built HTML/CSS/JS. Overall, Telegraf-InfluxDB-Grafana support nearly seamless integration, and later, if needed, the real-time logs, the people-count readings and the point cloud data can also be sent via Telegraf using the file input or tail plugin. An example Grafana panel query is shown below.
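As a concrete example, a Grafana panel showing the people count could use an InfluxQL query along the following lines (InfluxDB 1.8). The measurement name people_count, the field count and the deviceId value are assumptions and must match whatever client.py actually writes; $timeFilter and $__interval are Grafana's built-in macros for the InfluxDB data source.

```sql
SELECT mean("count")
FROM "people_count"
WHERE "deviceId" = 'your-device-id' AND $timeFilter
GROUP BY time($__interval) fill(null)
```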
Step-by-Step Guide to Deploy a New Node
On the Raspberry Pi, install the dependencies, i.e. Python 3 and Telegraf. To push people-count data from the mmWave sensor to InfluxDB:
git clone repo_url   # change the repo URL
cd mmWave
nano config.ini      # change deviceId to uniquely identify the device
nohup python3 client.py > output.log &
To push performance metrics, set up Telegraf with the configuration below.
[agent]
hostname = "EWI-1"
flush_interval = "30s"
interval = "30s"
# Input Plugins
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
# Output Plugin InfluxDB
[[outputs.influxdb]]
database = "telegraf"
urls = [ "influxDb url" ]
username = "askENS"
password = "askENS"
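Once this configuration is saved to /etc/telegraf/telegraf.conf (with the InfluxDB URL placeholder replaced), Telegraf can be checked and restarted as follows, assuming it was installed as a systemd service (the default for the Debian/Raspberry Pi OS packages):

```sh
# Print a few sample metrics without writing to InfluxDB, to validate the config
telegraf --test --config /etc/telegraf/telegraf.conf
# Apply the new configuration and confirm the service is running
sudo systemctl restart telegraf
sudo systemctl status telegraf
```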
Creating new users to access data in Grafana and InfluxDB
Grafana has three types of users, namely Admin, Editor and Viewer. The admin details are shared with ENS, and new users can be created on request. To create a new account, follow the steps here. The Grafana server runs on port 3000 of the server and InfluxDB on port 8086.
References
- https://dev.ti.com/tirex/explore/node?node=AJoMGA2ID9pCPWEKPi16wg__VLyFKFf__LATEST
- https://docs.influxdata.com/influxdb/v1.8/
- https://www.influxdata.com/time-series-platform/telegraf/
- https://grafana.com/docs/grafana/latest/getting-started/
- https://nwmichl.net/2020/07/14/telegraf-influxdb-grafana-on-raspberrypi-from-scratch/
Support or Contact
Having trouble with any of these steps? Mail us and we’ll help you sort it out.