Monitoring: We continue to experience instances where data, although published and acknowledged (an ACK is returned to the client), is not routed correctly into the database.
To address this, we have implemented new measures aimed at balancing connections better and ensuring that customers with faulty MQTT client implementations don't overload the broker with unused connections.
Specifically, our DevOps team has:
- Introduced an authentication rate limit: a token cannot send more than 4 auth messages per second.
- Implemented a maximum number of MQTT connections per user. This means a particular customer cannot create more than X MQTT connections, where X is set based on the number of devices, license, and current usage.
- Deployed what's known as "sticky connections", which ensure that an established session is always routed through the same balancing server for as long as the connection stays alive. Along with this, instead of a single MQTT balancing server, there are now 3.
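The per-token authentication rate limit above can be sketched as a sliding-window counter. This is a minimal illustration rather than our production code: the limit of 4 auth messages per second comes from the measure described above, while the function and variable names are hypothetical.

```python
import time
from collections import defaultdict, deque

AUTH_LIMIT = 4        # max auth messages per token per window (from the measure above)
WINDOW_SECONDS = 1.0  # one-second window

# Timestamps of recent auth attempts, keyed by token.
_attempts = defaultdict(deque)

def allow_auth(token, now=None):
    """Return True if this token may send another auth message right now."""
    now = time.monotonic() if now is None else now
    window = _attempts[token]
    # Evict attempts that have fallen out of the one-second window.
    while window and now - window[0] >= WINDOW_SECONDS:
        window.popleft()
    if len(window) >= AUTH_LIMIT:
        return False  # over the limit: the broker would reject this attempt
    window.append(now)
    return True
```

With this limiter, a fifth auth attempt inside the same second is rejected, while attempts spread out over time keep passing.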
10 Oct 2024 12:21:50 (1 month ago)
Monitoring: This week, after several deep dives into our MQTT broker logs, our DevOps team focused their efforts on troubleshooting one particular point in the MQTT data reception stack: the internal HTTP webhook that interfaces the broker with the internal data ingestion queues.
We found that the webhook timeout and connection pool size configuration were playing an important role in ensuring data reception. With that:
- We increased the internal HTTP webhook timeout.
- We increased the maximum connection pool size for every MQTT node.
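The reasoning behind the pool-size increase can be made concrete with Little's law: the number of in-flight webhook calls is roughly the message arrival rate times the webhook latency, so the pool must be at least that large or requests queue up and hit the timeout. The figures below are purely illustrative, not our actual traffic numbers.

```python
import math

def required_pool_size(peak_msgs_per_sec, webhook_latency_sec, headroom=1.5):
    """Minimum HTTP connection pool size so webhook calls don't queue.

    Little's law: in-flight requests L = arrival rate (lambda) x latency (W).
    The headroom factor absorbs latency spikes.
    """
    return math.ceil(peak_msgs_per_sec * webhook_latency_sec * headroom)

# e.g. 2000 msg/s with a 50 ms webhook round-trip needs ~150 pooled connections
</imports>```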
After seeing a positive impact on the alert occurrence rate, the DevOps team took the following measures, on top of the two above, to verify the behavior:
- Disconnected 1 of the 3 balancers. We had added one while testing whether the problem came from a connection overload.
- Stopped routing a portion of the traffic to a separate MQTT broker. That broker had been deployed to minimize the load on the main deployment.
- Increased the number of HTTP-service pods receiving the webhook requests from the MQTT broker.
In summary, these actions have brought a substantial reduction in data loss. We now see only very sporadic alerts, and after detailed monitoring of client data, we are no longer seeing data gaps.
We will continue to monitor the stability of the MQTT data reception service for further tuning.
20 Sep 2024 21:40:31 (2 months ago)
Monitoring: After the balancer server fine-tuning and the activation of the MQTT flapping-detection mechanism last Friday (September 6th), our internal checks still detected failures to deliver data over MQTT, and similar reports came from tests run by our support team. Nonetheless, the occurrence of alerts has been decreasing with each measure our DevOps team has taken.
As our goal is to provide stability and make sure data isn't lost, we continue to implement changes to completely mitigate these MQTT intermittencies. With that, today we have:
- Deployed an additional load balancer.
- Increased the number of pods (containers) running the MQTT ingestion services.
9 Sep 2024 20:55:48 (2 months ago)
Identified: Our DevOps team has taken the following additional measures to reduce the MQTT intermittencies, although they're still present:
- Fine-tuned the servers running our load balancers to support more concurrent connections.
- Enabled a feature in the MQTT broker that automatically detects clients that connect and disconnect too frequently (i.e., whose connection rate exceeds a threshold) within a time window. If the threshold is exceeded during the window, the offending MQTT client is banned for a configurable duration.
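The flapping-detection mechanism described above can be sketched as follows. The window, threshold, and ban duration below are illustrative placeholders; in practice they are broker configuration values.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60.0   # detection window (illustrative)
MAX_DISCONNECTS = 15    # disconnects allowed per window (illustrative)
BAN_SECONDS = 300.0     # ban duration once flapping is detected (illustrative)

_disconnects = defaultdict(deque)  # client_id -> recent disconnect timestamps
_banned_until = {}                 # client_id -> time at which the ban expires

def on_disconnect(client_id, now=None):
    """Record a disconnect and ban the client if it is flapping."""
    now = time.monotonic() if now is None else now
    events = _disconnects[client_id]
    events.append(now)
    # Keep only disconnects inside the detection window.
    while events and now - events[0] >= WINDOW_SECONDS:
        events.popleft()
    if len(events) > MAX_DISCONNECTS:
        _banned_until[client_id] = now + BAN_SECONDS

def is_banned(client_id, now=None):
    now = time.monotonic() if now is None else now
    return _banned_until.get(client_id, 0.0) > now
```

A client that disconnects more than the threshold allows within the window is refused connections until the ban expires; well-behaved clients are unaffected.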
6 Sep 2024 23:40:39 (3 months ago)
Identified: After implementing the detailed log, we were able to spot that significant traffic reaching our servers came from inactive users. This traffic was not being rejected directly at our load balancers; instead, it was allowed to connect and publish data.
We have now blocked said traffic completely, ensuring only active customers are able to connect. This prevents overloading the MQTT servers with invalid traffic.
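Conceptually, the new rejection rule is a simple allow-list check at the edge. The account store and token names below are hypothetical; the real check runs in our load balancers.

```python
# Hypothetical account registry: token -> whether the customer is active.
ACCOUNTS = {
    "tok-active-customer": True,
    "tok-inactive-customer": False,
}

def accept_connection(token):
    """Accept only tokens that belong to an active customer.

    Unknown or inactive tokens are rejected at the load balancer,
    so they never reach the MQTT servers.
    """
    return ACCOUNTS.get(token, False)
```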
So far, the internal alerts have decreased considerably, but some still remain.
Our team continues investigating what else is causing the remaining alerts and MQTT intermittencies.
5 Sep 2024 21:59:19 (3 months ago)
Investigating: Over the past 2 weeks, our MQTT service has been experiencing latency and intermittency when publishing data or establishing connections to do so. In some cases this has resulted in data loss and in a diminished perceived quality of service.
Our DevOps team is aware of the problem through user reports channeled via our support team. Our internal checks have pointed us to the issue as well.
Our DevOps team has been monitoring the behavior, and so far we believe sudden spikes of connections are causing the intermittencies. The team has:
- Established more aggressive restrictions on connections per IP.
- Established a lower rate limit of connections per second per IP.
These two changes have improved the situation but have not fixed it completely.
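The per-IP restrictions above can be sketched as a concurrent-connection cap. The limit below is a placeholder, not our actual setting, and the names are hypothetical.

```python
from collections import Counter

MAX_CONNS_PER_IP = 50  # placeholder value; the real limit is part of our tuning

_active = Counter()  # ip -> currently open connections

def try_connect(ip):
    """Admit a new connection unless the IP is already at its cap."""
    if _active[ip] >= MAX_CONNS_PER_IP:
        return False
    _active[ip] += 1
    return True

def on_close(ip):
    """Release a slot when a connection from this IP closes."""
    if _active[ip] > 0:
        _active[ip] -= 1
```

An IP at its cap is rejected until one of its existing connections closes, which keeps a single misbehaving client from exhausting the broker's capacity.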
As of the time of this note (5 Sep 2024 16:38 UTC), we're implementing a more robust and detailed log that allows us to trace networking and usage per client, with the aim of finding the direct cause of the spikes. This will allow us to determine paths toward a definitive solution.
We will keep updating this incident as more information becomes available.
5 Sep 2024 16:38:57 (3 months ago)