Saturday 30 December 2017

Moving a ClickOnce Installation

A while ago I needed to move a ClickOnce installation to a new URL as part of an update I was rolling out. Now, anyone who is familiar with ClickOnce knows you can migrate an application to a new URL from inside Visual Studio by setting the new "Update Location" in the publish settings dialog.

In the past, I have bookmarked and followed Robin's guide on how to move a ClickOnce deployment using this standard method. Robin's blog is a one-stop shop for any technical information regarding ClickOnce.

However, guess what? It turns out this method does not work if you want to change the CPU type of the application (which is what I wanted to do: AnyCPU to x86) or if your certificate has already expired. There is nothing you can do about this, because the processor architecture setting is baked into the ClickOnce deployment manifest.

Basically, I was stuck!

And then I found this Stack Overflow post about uninstalling a ClickOnce application silently. The accepted answer links to a project on GitHub: a small library that was used by the Wunderlist app. The project is great; the code finds the uninstall entry in the registry and silently executes the uninstaller in the background of the application.

I used this to uninstall my ClickOnce app and then kick off the installer for the new version of the app at a second location. It's so useful that, if I ever need to deploy another ClickOnce application, I think I will include this code by default and wire it up to the database.

Problems

However, the code failed for some users. It turns out (for reasons I won't go into) that some of the installations had different application names, which tripped up the code that finds the uninstaller. So I forked the GitHub repository and added a function that finds the uninstaller by the ClickOnce application URL instead. The code I added can be found here and my pull request is here.

Final Solution

Using this library, my final solution for this problem looked something like this:

private void MigrateToNewUrl(string newLocation)
{
    try
    {
        // Step 1: Get Uninstaller Location
        var location = "file://someserver/somedirectory/application/appname.application";
        var uninstallInfo = UninstallInfo.FindByInstallerUrl(location);

        if (uninstallInfo == null)
        {
            MessageBox.Show("Could not find application to uninstall");
            return;
        }

        // Step 2: Start Silent Uninstall Process
        var uninstaller = new Uninstaller();
        uninstaller.Uninstall(uninstallInfo);

        // Step 3: Start Install of new ClickOnce deployment
        // e.g. newLocation = @"\\someserver2\somedirectory2\application\appname.application"
        Process.Start(newLocation);
        Application.Exit();
    }
    catch (Exception ex)
    {
        Logger.LogException(ex, MethodBase.GetCurrentMethod().Name, SystemInformation.UserName);
    }
}

Upgrading from the database

The above method can then be called from the MainForm like so:

private void MainForm_Load(object sender, EventArgs e)
{
    try
    {
        string sql = "select PropertyValue from AppSettings where PropertyName = 'MigrateApp'";
        if ( m_dbConn.ExecSqlCommandScalar<int>(sql) == 1 )
        {
            sql = "select PropertyValue from AppSettings where PropertyName = 'NewLocation'";
            var newLocation = m_dbConn.ExecSqlCommandScalar<string>(sql);
            MigrateToNewUrl(newLocation);
        } 
    }
    catch (Exception ex)
    {
        Logger.LogException(ex, MethodBase.GetCurrentMethod().Name, SystemInformation.UserName);
    }
}

On start-up of the application, the code above checks the database to see if the app should be migrated to a second location and, if so, performs the uninstall and setup process automatically.
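To trigger the migration for all clients, the two settings rows just need updating. A hypothetical sketch of the SQL (the table and column names here follow the queries above; the path is a made-up example, so adjust both to match your own schema and share):

```sql
-- Flip the flag so clients migrate on their next start-up
UPDATE AppSettings
SET PropertyValue = '1'
WHERE PropertyName = 'MigrateApp';

-- Point clients at the new deployment location
UPDATE AppSettings
SET PropertyValue = '\\someserver2\somedirectory2\application\appname.application'
WHERE PropertyName = 'NewLocation';
```

Once every client has migrated, the flag can be set back to 0.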


Contact Me:  ocean.airdrop@gmail.com

Tuesday 5 December 2017

Load balancing TCP/IP traffic using HAProxy/Nginx

Despite all our best efforts, problems can and do happen in production. If you are practicing CI/CD and continually pushing new code to a server, bugs can sometimes creep in.

When that happens you want to be able to inspect, debug and even step-through the code to identify and fix the problem.

However, depending on the type of problem, it can be hard to reproduce on a dev box. Some problems only occur with real, live data flowing through the system. For example, if the problem involves incoming TCP/IP connections from a specific set of IoT devices, it can be hard to reproduce in dev.

Of course, it's considered bad form to debug the code live on the production server. If you attach a debugger to an application on the production server, you effectively stop all clients communicating with it while you step through the code.

Not good form! And let's not mention leaving a breakpoint on and then going for lunch!

In this scenario, what you really want to do is divert a small subset of traffic to a test/staging server for analysis, so you can step through the code in debug-mode.

In this circumstance, you need a reverse proxy server / load balancer in front of your servers to be able to redirect traffic. There are lots of proxy servers out there, but HAProxy and Nginx are popular choices. For example, HAProxy is used by Stack Overflow and GitHub, which means it has been heavily tested in the field.

HAProxy

Using HAProxy, it's relatively easy to set up a rule that forwards a single IP address to a test/development server for you to debug the code. In the diagram below, the yellow device's traffic gets singled out and routed to the test box by HAProxy.

So how do you configure this? Well, let's start with the basic HAProxy config below. As a minimum, the config needs a frontend section and a backend section. In the example, the frontend section listens on port 23 for incoming TCP/IP connections and routes them to the servers defined in the backend section.

# HAProxy
frontend my_proxy_server
    bind 192.168.137.100:23
    mode tcp
    default_backend my_app_servers

backend my_app_servers
    mode tcp
    balance roundrobin
    server app1 192.168.137.101:23
    server app2 192.168.137.102:23

Nice and simple!

To route a specific IP address to a different server, you need to use an access control list (acl) rule. It looks like this:

# HAProxy
frontend my_proxy_server
    bind 192.168.137.100:23
    mode tcp

    acl test_sites src 192.168.100.1 192.168.100.2
    use_backend my_test_servers if test_sites

    default_backend my_app_servers

backend my_app_servers
    mode tcp
    balance roundrobin
    server app1 192.168.137.101:23
    server app2 192.168.137.102:23

backend my_test_servers
    mode tcp
    server app1 192.168.137.103:23

The two key lines are:

acl test_sites src 192.168.100.1 192.168.100.2
use_backend my_test_servers if test_sites

acl test_sites src 192.168.100.1 192.168.100.2 sets up an acl rule named test_sites. It is activated when a client connects to HAProxy from the IP address 192.168.100.1 or 192.168.100.2.

If the acl rule matches, the second line, use_backend my_test_servers if test_sites, routes the connection to the my_test_servers backend, which diverts all of that client's traffic to the test server 192.168.137.103.

Also nice and simple!
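The acl is not limited to listing individual addresses; HAProxy's src matcher also accepts CIDR ranges, so a whole test subnet can be diverted with one rule. A hedged sketch (the subnet below is made up for illustration):

```
# Route an entire test subnet to the test backend
acl test_sites src 192.168.100.0/24
use_backend my_test_servers if test_sites
```

This is handy when your test devices all live on one network segment and you don't want to maintain a growing list of IPs.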

Useful HAProxy Commands

  • sudo apt-get install haproxy - installs HAProxy
  • haproxy -v - prints the HAProxy version, a quick check that it is installed
  • sudo haproxy -c -f /etc/haproxy/haproxy.cfg - after you have made changes to the config file, this checks the file is valid
  • sudo service haproxy restart - restarts HAProxy

HAProxy Gotcha!

As good as HAProxy is, there is a gotcha: it does not proxy UDP traffic. It's a TCP/HTTP load balancer. This is a shame, but there is light at the end of the tunnel. If you want to port forward/proxy UDP traffic, you might want to check out Nginx, which supports UDP as well as TCP.

So what about Nginx?

As mentioned, Nginx can also be used as a reverse proxy for TCP/UDP traffic. The config file is similar in spirit to HAProxy's, in that it is split into two halves: a server section that listens on a port, and an upstream block that the traffic is directed to. Here is what I first wrote while playing with Nginx, based on tutorials on the web.

# Warning. This doesn't work because you cant use the "if" statement in a stream context!
stream {
    upstream prod_backend {
        server 192.168.137.129:23;
    }
    upstream test_backend {
        server 192.168.137.131:23;
    }

    server {
        listen 23;

        # nginx rejects this line - "if" is not allowed in a stream context
        if ( $remote_addr = 192.168.137.132 ) {
            proxy_pass test_backend;
        }

        proxy_pass prod_backend;
    }
}

That's nice and simple. There is only one problem: the above config file doesn't work! It turns out that you cannot use the if statement in a stream context. It works just fine in an http context, but not in a stream context.

The alternative solution is to use the map statement instead like this:

# This version works!
stream {
   upstream prod_backend {
      server 192.168.137.129:23;
   }

   upstream test_backend {
      server 192.168.137.131:23;
   }

   map $remote_addr $backend_svr {
      192.168.137.140 "test_backend";
      default "prod_backend";
   }

   server {
      listen 23;
      proxy_pass $backend_svr;
   }
}

In the above config, the map block takes the nginx built-in variable $remote_addr and compares it against the lookups in the block. When it finds a match, it sets the variable $backend_svr to the value on the right-hand side. So, when the client's IP address is 192.168.137.140, $backend_svr gets set to test_backend, which is then used as the upstream backend.

This works, and now we are back to the same behaviour as the HAProxy version.
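One limitation: map matches exact strings (or regexes), so it cannot match a CIDR range against $remote_addr. If you want to divert a whole subnet, the stream geo module can do that instead; the fragment below is a sketch under the assumption that your nginx build includes ngx_stream_geo_module (the subnet is made up):

```
# Inside the stream block: geo matches the client address against CIDR ranges
geo $remote_addr $backend_svr {
    192.168.137.0/24  "test_backend";
    default           "prod_backend";
}
```

This drops in as a replacement for the map block, with the server and upstream sections unchanged.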

Useful Nginx Commands

  • sudo apt-get install nginx - install Nginx
  • sudo nginx -v - get the version of Nginx
  • sudo nano /etc/nginx/nginx.conf - edit the Nginx config file
  • sudo /etc/init.d/nginx start - start Nginx
  • sudo /etc/init.d/nginx stop - stop Nginx
  • sudo /etc/init.d/nginx restart - restart Nginx
  • systemctl status nginx.service - check the status of the nginx service

Nginx UDP Gotcha!

Even though Nginx supports UDP load balancing, there is also a gotcha! It doesn't perform session persistence out of the box. This means that protocols like OpenVPN will not work, as they rely on a persistent channel. During my testing I could see a new session for every packet that came over the wire. Session persistence is available in Nginx Plus, the expensive paid-for version of nginx. At some point this feature might trickle down to the free version, but it does not look like it has made its way down at the time of writing.

Wrapping up...

That's it. It's fairly simple to set up TCP load balancing in both HAProxy and Nginx, but there are difficulties if you have a persistent UDP protocol.

