Research checkpoint #07: more tests for the Raspberry, checking its sniffing performance
During these last weeks, I kept testing the Raspberry Pi to verify its performance, continuing what I started in the last post, but this time conducting experiments with the Python libraries cited in post 04, which are capable of sniffing network traffic and manipulating this data. Here, I report the results and bring a brief discussion about them; I think it will be useful to revisit all of these findings later to explore them in more depth!
Tests
The tests were conducted using the scripts available here, which implement network traffic sniffers using the Pyshark, Scapy and Pypcap Python libraries, the last one also accompanied by the dpkt library, which extracts information from the packets. The test flow is very simple, aimed at measuring the Raspberry Pi's performance dealing with live traffic using these different libraries (a sketch of this flow follows the list):
- Run the Python script and wait 10 seconds;
- Start a transmission of packets in the network to simulate live traffic for 10 seconds;
- After the artificial traffic finishes, wait another 30 seconds and end the test, collecting the results¹.
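As a rough illustration, this flow could be driven by a Python script like the sketch below. The sniffer script name and the iperf3 parameters here are placeholders, not the ones used in the repository:

```python
import subprocess
import time

# "sniffer.py" stands for one of the three sniffer scripts; the iperf3
# parameters below are just an example scenario (UDP at 5 Mb/sec).
sniffer = subprocess.Popen(["python3", "sniffer.py"])

time.sleep(10)  # 1. wait 10 seconds after starting the sniffer

# 2. simulate live traffic for 10 seconds; an iperf3 server must already
# be listening, e.g. started with `iperf3 -s`
subprocess.run(["iperf3", "-c", "127.0.0.1", "-u", "-b", "5M", "-t", "10"])

time.sleep(30)  # 3. wait another 30 seconds before ending the test

sniffer.terminate()  # stop the sniffer and collect the results
sniffer.wait()
```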
All of the flow above is implemented in the bash script available in the repository cited before. Each test was conducted five times in the same environment, and the metrics exhibited below are based on these executions, usually taking the mean of the observed values. To register each metric, I used the following (a measurement sketch follows the list):
- To measure the CPU use, the psutil Python library, which can retrieve information about system performance, like the percentage of CPU used by a process. Its operation is similar to the `ps` command in Linux: the `cpu_percent` method returns a float representing the CPU utilization of a process as a percentage since its last call, based on the total CPU time consumed, so this value can be bigger than 100.0 if the process runs multiple threads on different cores.
- To measure the memory use, the `resource` Python module, which can measure the maximum resident set size used by the process in KBytes, in the same way described in the last post.
- The “number of packets counted” is calculated by the Python scripts themselves: their job is to count how many network traffic packets were sniffed and how many of them are TCP packets. This last number was collected only to perform a minimal packet parsing, also using the libraries to extract some content from the data; it isn’t reported below as it doesn’t add much information.
- The “iperf3 flow data” field reports the parameters of the packet transmission performed by the iperf3 app, which is the one used to simulate live traffic. More information about its use below.
- For the “total time”, the `time` command, just to be sure that the workflow defined above was being followed.
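For reference, here is a minimal sketch of how the two resource metrics can be collected inside a sniffer process; this is an assumed structure, not the repository's actual code:

```python
import os
import resource

import psutil

proc = psutil.Process(os.getpid())
proc.cpu_percent()  # first call sets the reference point and returns 0.0

# ... the sniffing loop would run here ...

# CPU utilization (%) of this process since the previous cpu_percent() call;
# it can exceed 100.0 if the process ran multiple threads on different cores.
cpu = proc.cpu_percent()

# Maximum resident set size of the process; on Linux, ru_maxrss is in KBytes.
mem_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(f"CPU: {cpu}%, peak memory: {mem_kb} KB")
```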
As commented above, the application used to generate network traffic is iperf3, a tool that can transmit packets in the client-server model and allows many parameters to be adjusted. The scenarios executed were (a sketch of the corresponding invocations follows the list):
- Transmission of TCP packets on localhost, with the Raspberry being both the client and the server;
- Transmission of UDP packets at bitrates of 1 Mb/sec, 2 Mb/sec, 3 Mb/sec, 4 Mb/sec, 5 Mb/sec and 10 Mb/sec on localhost;
- Transmission of UDP packets without a defined bitrate limit on localhost;
- All of the scenarios above, but with the Raspberry being the client and my personal notebook being the server, on a private network without internet access.
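The invocations behind these scenarios look roughly like the sketch below; the server address and exact flags are assumptions based on iperf3's standard options, not copied from the repository:

```python
import subprocess

SERVER = "127.0.0.1"  # localhost scenarios; the notebook's IP in the remote ones

# TCP transmission (iperf3's default protocol) for 10 seconds:
subprocess.run(["iperf3", "-c", SERVER, "-t", "10"])

# UDP at fixed bitrates for 10 seconds each:
for rate in ["1M", "2M", "3M", "4M", "5M", "10M"]:
    subprocess.run(["iperf3", "-c", SERVER, "-u", "-b", rate, "-t", "10"])

# UDP without a bitrate limit (-b 0 disables iperf3's pacing):
subprocess.run(["iperf3", "-c", SERVER, "-u", "-b", "0", "-t", "10"])
```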
The specs of the machines used in the tests are:
- Raspberry Pi 3 Model B
- Quad Core 1.2GHz Broadcom BCM2837 64bit CPU
- 1GB RAM
- Debian GNU/Linux 12 (bookworm) OS
- 100 Mbit/s network interface card (Fast Ethernet)
- Acer Aspire 3 A315-41-R4RB
- AMD Ryzen 5 2500U 2.0GHz 64bit CPU
- 12GB DDR4 2667 MHz RAM
- Fedora Linux 41 (Silverblue) OS
- 1000 Mbit/s network interface card (Gigabit Ethernet)
For each of the tests described above, I tried the three libraries cited; the results of each one are reported below.
Pyshark
Test | CPU use (mean) | Peak memory use (max) | Packets counted (mean) | iperf3 flow data | Total time (mean) |
---|---|---|---|---|---|
Localhost TCP | 80.22% | 36412 KB | 258 | 3.53 Gbits/sec, 4.11 GBytes | 53.351s |
Localhost UDP 1Mb/sec | 22.18% | 30696 KB | 108.8 | 1.02 Mbits/sec, 1.22 MBytes in 39 packets | 53.129s |
Localhost UDP 2Mb/sec | 41.3% | 30892 KB | 147.6 | 2.02 Mbits/sec, 2.41 MBytes in 77 packets | 53.151s |
Localhost UDP 3Mb/sec | 60.5% | 30696 KB | 185.2 | 3.01 Mbits/sec, 3.59 MBytes in 115 packets | 53.173s |
Localhost UDP 4Mb/sec | 79.22% | 30928 KB | 223.2 | 4.01 Mbits/sec, 4.78 MBytes in 153 packets | 53.287s |
Localhost UDP 5Mb/sec | 80.3% | 30744 KB | 214.4 | 5.01 Mbits/sec, 5.97 MBytes in 191 packets | 53.481s |
Localhost UDP 10Mb/sec | 80.74% | 30884 KB | 214.8 | 10.00 Mbits/sec, 11.9 MBytes in 382 packets | 53.447s |
Localhost UDP without limit | 81.04% | 30996 KB | 211.6 | 7.21 Gbits/sec, 8.4 GBytes in 275246 packets | 53.399s |
Remote server TCP | 79.92% | 28064 KB | 1550.6 | 94.64 Mbits/sec, 113 MBytes | 53.380s |
Remote server UDP 1Mb/sec | 41.2% | 27904 KB | 898 | 1 Mbits/sec, 1.19 MBytes in 864 packets | 53.217s |
Remote server UDP 2Mb/sec | 78.82% | 27776 KB | 1716.8 | 2.00 Mbits/sec, 2.38 MBytes in 1727 packets | 53.339s |
Remote server UDP 3Mb/sec | 79% | 27904 KB | 1725 | 3.00 Mbits/sec, 3.58 MBytes in 2590 packets | 53.379s |
Remote server UDP 4Mb/sec | 78.78% | 27904 KB | 1714.8 | 4.00 Mbits/sec, 4.77 MBytes in 3453 packets | 53.354s |
Remote server UDP 5Mb/sec | 78.88% | 27904 KB | 1717.4 | 5.00 Mbits/sec, 5.96 MBytes in 4316 packets | 53.346s |
Remote server UDP 10Mb/sec | 78.94% | 27904 KB | 1711.4 | 10.0 Mbits/sec, 11.9 MBytes in 8632 packets | 53.378s |
Remote server UDP without limit | 79.3% | 27904 KB | 1681.8 | 95.7 Mbits/sec, 114 MBytes in 82620 packets | 53.357s |
The first thing to notice above is the intense use of CPU, far greater than with any of the other Python libraries. And this intensive use wasn’t converted into better results: in the localhost scenario, we see that the library starts to struggle at 10 Mbits/sec of bitrate, not accounting for all of the packets, and it’s even worse with the remote server, failing already at 2 Mbits/sec. This gives us some hints: the library is not suited to handling big volumes of traffic data, performing a little better with a smaller volume of bigger packets (as in the localhost scenario). However, it’s also worth noticing that it apparently keeps a buffer of unprocessed packets, as the data keeps being processed after the live flow has ended; still, the time given wasn’t sufficient to handle all of the traffic.
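For context, a minimal Pyshark sniffer in the spirit of the test scripts looks roughly like this (assumed structure; the actual script is in the repository cited above):

```python
import pyshark

total = 0
tcp = 0

# LiveCapture reads packets from a network interface ("lo" here for the
# localhost tests; the real scripts may use a different interface).
capture = pyshark.LiveCapture(interface="lo")

try:
    # sniff_continuously() yields packets as they arrive.
    for packet in capture.sniff_continuously():
        total += 1
        if "TCP" in packet:  # minimal parsing: does it have a TCP layer?
            tcp += 1
except KeyboardInterrupt:  # stop with Ctrl+C; the tests used a timeout instead
    pass

print(f"{total} packets sniffed, {tcp} TCP")
```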
Scapy
Test | CPU use (mean) | Peak memory use (max) | Packets counted (mean) | iperf3 flow data | Total time (mean) |
---|---|---|---|---|---|
Localhost TCP | 20.1% | 76552 KB | 1568.8 | 3.61 Gbits/sec, 4.21 GBytes | 54.401s |
Localhost UDP 1Mb/sec | 1.14% | 73228 KB | 136.4 | 1.02 Mbits/sec, 1.22 MBytes in 39 packets | 54.398s |
Localhost UDP 2Mb/sec | 1.64% | 73480 KB | 217.6 | 2.02 Mbits/sec, 2.41 MBytes in 77 packets | 54.412s |
Localhost UDP 3Mb/sec | 2.26% | 74120 KB | 293.6 | 3.01 Mbits/sec, 3.59 MBytes in 115 packets | 54.396s |
Localhost UDP 4Mb/sec | 2.9% | 74636 KB | 370.4 | 4.01 Mbits/sec, 4.78 MBytes in 153 packets | 54.407s |
Localhost UDP 5Mb/sec | 3.96% | 74888 KB | 458.8 | 5.01 Mbits/sec, 5.97 MBytes in 191 packets | 54.424s |
Localhost UDP 10Mb/sec | 5.44% | 74892 KB | 812 | 10.0 Mbits/sec, 11.9 MBytes in 382 packets | 54.421s |
Localhost UDP without limit | 20.8% | 76428 KB | 1860 | 6.94 Gbits/sec, 8.08 GBytes in 273844 packets | 54.414s |
Remote server TCP | 20.26% | 67484 KB | 3552 | 94.66 Mbits/sec, 113 MBytes | 54.478s |
Remote server UDP 1Mb/sec | 7.02% | 67736 KB | 899.4 | 1.00 Mbits/sec, 1.19 MBytes in 864 packets | 54.472s |
Remote server UDP 2Mb/sec | 11.08% | 67740 KB | 1760 | 2.00 Mbits/sec, 2.38 MBytes in 1727 packets | 54.469s |
Remote server UDP 3Mb/sec | 16.04% | 67736 KB | 2626.4 | 3.00 Mbits/sec, 3.58 MBytes in 2590 packets | 54.480s |
Remote server UDP 4Mb/sec | 20.3% | 67868 KB | 3439.4 | 4.00 Mbits/sec, 4.77 MBytes in 3453 packets | 54.485s |
Remote server UDP 5Mb/sec | 20.4% | 67740 KB | 3434.6 | 5.00 Mbits/sec, 5.96 MBytes in 4316 packets | 54.474s |
Remote server UDP 10Mb/sec | 20.36% | 67992 KB | 3377.6 | 10.0 Mbits/sec, 11.9 MBytes in 8632 packets | 54.478s |
Remote server UDP without limit | 20.34% | 67736 KB | 2919.2 | 95.7 Mbits/sec, 114 MBytes in 82620 packets | 54.484s |
The CPU use is way more controlled, and the packet counts improved in comparison to the last library: it only starts to fail at 4 Mb/sec in the remote server scenario, though it still loses a lot of packets.
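For comparison, the Scapy counterpart can be sketched like this (again an assumed structure, not the repository's exact code):

```python
from scapy.all import TCP, sniff

total = 0
tcp = 0

def count(packet):
    """Called once per captured packet; haslayer() does the minimal parsing."""
    global total, tcp
    total += 1
    if packet.haslayer(TCP):
        tcp += 1

# sniff() blocks and hands each captured packet to prn; store=False avoids
# keeping every packet in memory, and timeout bounds the capture window.
sniff(iface="lo", prn=count, store=False, timeout=40)

print(f"{total} packets sniffed, {tcp} TCP")
```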
Pypcap + dpkt
Test | CPU use (mean) | Peak memory use (max) | Packets counted (mean) | iperf3 flow data | Total time (mean) |
---|---|---|---|---|---|
Localhost TCP | 19.38% | 21720 KB | 9530 | 2.36 Gbits/sec, 2.746 GBytes | 52.229s |
Localhost UDP 1Mb/sec | 0.1% | 21336 KB | 69.8 | 1.02 Mbits/sec, 1.22 MBytes in 39 packets | 52.222s |
Localhost UDP 2Mb/sec | 0.14% | 21336 KB | 107 | 2.02 Mbits/sec, 2.41 MBytes in 77 packets | 52.239s |
Localhost UDP 3Mb/sec | 0.2% | 21336 KB | 146 | 3.01 Mbits/sec, 3.59 MBytes in 115 packets | 52.229s |
Localhost UDP 4Mb/sec | 0.2% | 21336 KB | 183.2 | 4.01 Mbits/sec, 4.78 MBytes in 153 packets | 52.233s |
Localhost UDP 5Mb/sec | 0.3% | 21336 KB | 221.8 | 5.01 Mbits/sec, 5.97 MBytes in 191 packets | 52.238s |
Localhost UDP 10Mb/sec | 0.48% | 21336 KB | 412.2 | 10.0 Mbits/sec, 11.9 MBytes in 382 packets | 52.233s |
Localhost UDP without limit | 19.4% | 21336 KB | 17037.6 | 4.05 Gbits/sec, 4.71 GBytes in 154540 packets | 52.223s |
Remote server TCP | 19.4% | 21208 KB | 73000.8 | 94.7 Mbits/sec, 113 MBytes | 52.244s |
Remote server UDP 1Mb/sec | 0.52% | 21208 KB | 895.6 | 1.00 Mbits/sec, 1.19 MBytes in 864 packets | 52.222s |
Remote server UDP 2Mb/sec | 0.92% | 21208 KB | 1758.6 | 2.00 Mbits/sec, 2.38 MBytes in 1727 packets | 52.230s |
Remote server UDP 3Mb/sec | 1.4% | 21208 KB | 2623 | 3.00 Mbits/sec, 3.58 MBytes in 2590 packets | 52.230s |
Remote server UDP 4Mb/sec | 1.92% | 21208 KB | 3486.2 | 4.00 Mbits/sec, 4.77 MBytes in 3454 packets | 52.229s |
Remote server UDP 5Mb/sec | 2.26% | 21208 KB | 4349.2 | 5.00 Mbits/sec, 5.96 MBytes in 4316 packets | 52.233s |
Remote server UDP 10Mb/sec | 3.96% | 21208 KB | 8664.6 | 10.0 Mbits/sec, 11.9 MBytes in 8632 packets | 52.234s |
Remote server UDP without limit | 18.86% | 21208 KB | 81575 | 95.7 Mbits/sec, 114 MBytes in 82620 packets | 52.235s |
And finally, our best observations: the library apparently is capable of handling big volumes of traffic with low consumption of CPU time, even without a bitrate limit in the remote server scenario, losing just a few packets. Notice also that its memory use is constant, independent of the volume of data, and lower than that of the other libraries.
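A minimal pypcap + dpkt sniffer can be sketched as below; the interface name, the time window and the parsing details are assumptions, and the manual window check is the mechanism described in the footnote:

```python
import time

import dpkt
import pcap

total = 0
tcp = 0
deadline = time.time() + 40  # manually defined time window

# Open the interface for live capture. pypcap has no timeout mechanism, so
# the window is only checked when a packet arrives (see the footnote: a
# final ping generates the late packet that breaks the loop).
sniffer = pcap.pcap(name="lo", promisc=True, immediate=True)

for timestamp, raw in sniffer:
    if time.time() > deadline:
        break
    total += 1
    # Minimal parsing with dpkt: Ethernet -> IP -> TCP.
    eth = dpkt.ethernet.Ethernet(bytes(raw))
    if isinstance(eth.data, dpkt.ip.IP) and isinstance(eth.data.data, dpkt.tcp.TCP):
        tcp += 1

print(f"{total} packets sniffed, {tcp} TCP")
```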
Next steps
As defined in post 05, our current goals are these:
- Packet Parsing
  - Conduct experiments to verify the performance of Python libraries sniffing the network and processing the packets on a Raspberry Pi (different from what we did in post 04, in which we tested the libs with the CIC-IoT-2023 dataset and on a regular computer).
- Inference
  - Run the baseline models proposed in post 02 on a Raspberry Pi to verify their performance.
- Training (centralized IDS)
  - TBD.
I still want to revisit all the data reported in this post to make a deeper analysis and also to perform more experiments, especially considering a DDoS attack scenario to see how the libraries perform, so the packet parsing topic will still require some work :).
This post was made as a record of the progress of the research project “DDoS Detection in the Internet of Things using Machine Learning Methods with Retraining”, supervised by professor Daniel Batista, BCC - IME - USP. Project supported by the São Paulo Research Foundation (FAPESP), process nº 2024/10240-3. The opinions, hypotheses and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of FAPESP.
The pypcap library doesn’t implement a timeout mechanism that stops its execution after a given amount of time, so after the last 30 seconds (plus 2 seconds as a safety margin) I send a simple ping to the server to generate a packet that falls outside the time window defined manually in the code, so the loop breaks when the sniffer catches it. ↩