
HAProxy health checks for VMware Horizon & AppVolumes

A while ago I wrote a blog post about using HAProxy and Keepalived to make VMware Horizon connection servers and AppVolumes managers highly available. The load balancing config used in that post was a basic one that only checked whether the connection servers or AppVolumes managers were running by testing if the web server on them responded. The drawback of this is that when you set a connection server to “disabled”, the web server still responds, so HAProxy keeps directing connections to that server even though they get blocked on the server itself. The same goes for the AppVolumes managers: if there’s an error (e.g. a lost database connection), the web server is still running, but AppVolumes no longer works.

So I looked into how to get the correct status of the backend servers and how to make HAProxy use that status instead of just checking whether the web server replies.

VMware Horizon

For VMware Horizon, the supported way to check whether a connection server is available is to request <connection-server-url>/favicon.ico. This returns status code 200 if the connection server is available and ready to accept connections; if the connection server is (administratively) disabled, it returns status code 503. This works on version 7.7 and later. On earlier versions, the “disabled” status of a connection server is not reflected in this check.

With HAProxy we can check this and have a backend server marked as down as soon as the returned status code is anything other than 200. For this we adjust the HAProxy config from the earlier post as follows:

...
backend horizon
  mode tcp
  option ssl-hello-chk
  balance source

  option httpchk HEAD /favicon.ico
  server cs1 192.168.1.21:443 weight 1 check check-ssl verify none inter 30s fastinter 2s downinter 5s rise 3 fall 3
  server cs2 192.168.1.22:443 weight 1 check check-ssl verify none inter 30s fastinter 2s downinter 5s rise 3 fall 3
...
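Note that option httpchk treats any 2xx or 3xx response as a successful check, so the 503 returned by a disabled connection server already fails it. If you prefer to make the expectation explicit, HAProxy’s http-check expect directive can pin the accepted status to exactly 200. A minimal sketch of that variant (same backend as above, trimmed to the relevant lines):

...
backend horizon
  mode tcp
  option httpchk HEAD /favicon.ico
  # Only HTTP 200 counts as healthy; the 503 from a disabled server fails the check
  http-check expect status 200
  server cs1 192.168.1.21:443 check check-ssl verify none
...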

Once a connection server is disabled, HAProxy now correctly marks the backend server as down and stops sending connections to it.

Apart from getting a more accurate status from the connection servers, VMware also has recommendations on the check frequency, so you don’t overload the connection servers with health checks. VMware recommends scheduling the health checks every 30 seconds and setting a timeout of 91 seconds (3 times the 30-second check interval + 1 second). This results in the following HAProxy configuration:

...
frontend horizon-https
  mode tcp
  bind 192.168.1.20:443
  timeout client 91s
  default_backend horizon

backend horizon
  mode tcp
  option ssl-hello-chk
  balance source

  option httpchk HEAD /favicon.ico
  timeout server 91s
  server cs1 192.168.1.21:443 weight 1 check check-ssl verify none inter 30s fastinter 2s rise 5 fall 2
  server cs2 192.168.1.22:443 weight 1 check check-ssl verify none inter 30s fastinter 2s rise 5 fall 2
...

Finally, the last optimization I’ve made is to change the load balancing method from “source” to “least connections”. However, we must make sure that source persistence stays enabled, so that a client doesn’t switch connection servers between two consecutive connections; that would force the client to log in again on the other connection server. In HAProxy we can create a stick table and tell HAProxy to keep connections “sticky” based on the client’s IP address. When configuring the stick table, you have to specify an expiry timer. To calculate this timer, you need to know the values of the Horizon global settings “Forcibly Disconnect Users” and “Disconnect Applications and Discard SSO credentials for idle users”.

The expiry timer for the stick table should be set to 1/3 of the smallest of those two settings, or to 6 hours and 40 minutes if both are set to “never”.
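As a worked example (the 10-hour value below is Horizon’s default for “Forcibly Disconnect Users”, not something specific to this setup; substitute your own values):

  # Smallest of the two settings: 10 hours = 600 minutes
  # Stick table expiry: 600 / 3 = 200 minutes
  stick-table type ip size 1m expire 200m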

To accomplish this, we change/add the following lines in the HAProxy config:

backend horizon
  mode tcp
  option ssl-hello-chk
  balance leastconn
  stick-table type ip size 1m expire 200m
  stick on src
  option httpchk HEAD /favicon.ico
  timeout server 91s
  server cs1 192.168.1.21:443 weight 1 check check-ssl verify none inter 30s fastinter 2s rise 5 fall 2
  server cs2 192.168.1.22:443 weight 1 check check-ssl verify none inter 30s fastinter 2s rise 5 fall 2
...

So the complete HAProxy config for Horizon looks as follows:

### Horizon Connection servers ###
frontend horizon-http
  mode http
  bind 192.168.1.20:80
  # Redirect http to https
  redirect scheme https if !{ ssl_fc }

frontend horizon-https
  mode tcp
  bind 192.168.1.20:443
  # VMware recommended timeout: 3 x the 30s check interval + 1s
  timeout client 91s
  default_backend horizon

backend horizon
  mode tcp
  option ssl-hello-chk
  balance leastconn
  # Source persistence: expiry = 1/3 of the smallest Horizon disconnect setting
  stick-table type ip size 1m expire 200m
  stick on src
  # Health check: favicon.ico returns 200 when available, 503 when disabled
  option httpchk HEAD /favicon.ico
  timeout server 91s
  server cs1 192.168.1.21:443 weight 1 check check-ssl verify none inter 30s fastinter 5s rise 5 fall 2
  server cs2 192.168.1.22:443 weight 1 check check-ssl verify none inter 30s fastinter 5s rise 5 fall 2
######
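To verify at runtime that the health checks and the stick table behave as expected, you can expose HAProxy’s runtime API through a stats socket and query it, for example with socat. A minimal sketch (the socket path is an assumption; adjust it to your environment):

global
  # Runtime API socket; query it for example with:
  #   echo "show servers state horizon" | socat stdio /var/run/haproxy.sock
  #   echo "show table horizon" | socat stdio /var/run/haproxy.sock
  stats socket /var/run/haproxy.sock mode 660 level admin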

VMware AppVolumes 4.x

The HAProxy config for AppVolumes can be optimized in a similar way. The recommended health check for AppVolumes 4.x is the <appvolumes>/health_check URL, which returns the actual status of the AppVolumes managers. The recommended interval for the health check is 30 seconds and the recommended timeout is 10 seconds. As in the Horizon config, we’ll change the load balancing method to “least connections” and create a stick table to provide source persistence.

The new and optimized HAProxy config for AppVolumes 4.x looks like this:

### AppVolume Managers ###
frontend appvol-http
  mode http
  bind 192.168.1.30:80
  # Redirect http to https
  redirect scheme https if !{ ssl_fc }

frontend appvol-https
  mode tcp
  bind 192.168.1.30:443
  # VMware recommended timeout for AppVolumes 4.x
  timeout client 10s
  default_backend appvol

backend appvol
  mode tcp
  option ssl-hello-chk
  balance leastconn
  stick-table type ip size 1m expire 200m
  stick on src
  # AppVolumes 4.x health check endpoint
  option httpchk HEAD /health_check
  timeout server 10s
  server avm1 192.168.1.31:443 weight 1 check check-ssl verify none inter 30s fastinter 5s rise 5 fall 2
  server avm2 192.168.1.32:443 weight 1 check check-ssl verify none inter 30s fastinter 5s rise 5 fall 2
######
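As a side note, on HAProxy 2.2 and newer the health check request can also be written with the newer http-check directives instead of passing the method and URI to option httpchk. A minimal sketch for the AppVolumes backend (assuming the manager answers 200 when healthy):

backend appvol
  option httpchk
  # HAProxy 2.2+ style: describe the check request and expected response explicitly
  http-check send meth HEAD uri /health_check
  http-check expect status 200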

Conclusion

The above HAProxy configuration is an optimized version of the configuration posted earlier in “VMware Horizon/AppVolumes LB with HAProxy and Keepalived on PhotonOS”. It now uses the VMware recommended settings for load balancing VMware Horizon and AppVolumes 4.x.
Remember to update the HAProxy config file on both load balancers if you’ve set up a highly available HAProxy configuration using Keepalived!

Sources:

Monitoring health of Horizon Connection Server using Load Balancer, timeout, Load Balancer persistence settings in Horizon 7.x and 8 (56636) (vmware.com)

VMware AppVolumes 4.x load balancing

