<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Haproxy on brtkwr.com</title><link>https://brtkwr.com/tags/haproxy/</link><description>Recent content in Haproxy on brtkwr.com</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 14 May 2026 10:00:00 +0000</lastBuildDate><atom:link href="https://brtkwr.com/tags/haproxy/index.xml" rel="self" type="application/rss+xml"/><item><title>Closing HA gaps in a redis-sentinel + haproxy setup</title><link>https://brtkwr.com/posts/2026-05-14-closing-ha-gaps-in-redis-sentinel-haproxy/</link><pubDate>Thu, 14 May 2026 10:00:00 +0000</pubDate><guid>https://brtkwr.com/posts/2026-05-14-closing-ha-gaps-in-redis-sentinel-haproxy/</guid><description>&lt;p>&lt;strong>TL;DR:&lt;/strong> A common haproxy-in-front-of-redis-sentinel setup will cascade-restart its haproxy pods during sentinel failover if the kubernetes liveness probe checks backend health. The probes need splitting: liveness reflects process aliveness, readiness reflects backend availability. I also had to add a &lt;code>preStop&lt;/code> hook that runs &lt;code>kill -USR1 1&lt;/code> so haproxy can drain existing connections before kubelet sends &lt;code>SIGTERM&lt;/code>, which is a hard stop in haproxy 3.x.&lt;/p>
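&lt;p>As a rough sketch of what that split looks like in the pod spec (port numbers, the &lt;code>redis-cli&lt;/code> binary in the image, and the 30-second drain window are assumptions, not taken from the actual manifests):&lt;/p>
&lt;pre>&lt;code># liveness: only "is the haproxy process alive and accepting TCP?"
livenessProbe:
  tcpSocket:
    port: 6379        # assumed haproxy frontend port
  periodSeconds: 5
# readiness: "can haproxy actually reach a healthy redis master?"
readinessProbe:
  exec:
    command: ["sh", "-c", "redis-cli -p 6379 ping | grep -q PONG"]
  periodSeconds: 5
lifecycle:
  preStop:
    exec:
      # USR1 asks haproxy to soft-stop (drain existing connections);
      # the sleep keeps the container up while it drains, since kubelet
      # sends SIGTERM as soon as the preStop hook returns
      command: ["sh", "-c", "kill -USR1 1; sleep 30"]
&lt;/code>&lt;/pre>
&lt;p>With this split, a sentinel failover only flips readiness (the pod drops out of the service), while liveness stays green and no restarts cascade.&lt;/p>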
&lt;h2 id="the-setup">
 The setup
 &lt;a class="heading-link" href="#the-setup">
 &lt;i class="fa-solid fa-link" aria-hidden="true" title="Link to heading">&lt;/i>
 &lt;span class="sr-only">Link to heading&lt;/span>
 &lt;/a>
&lt;/h2>
&lt;p>A small redis high-availability cluster:&lt;/p></description></item></channel></rss>