Saturday 30 December 2017

Moving a ClickOnce Installation

A while ago, as part of an update I was performing, I needed to move a ClickOnce installation to a new URL. Anyone familiar with ClickOnce knows you can migrate an application to a new URL inside Visual Studio by setting the new "Update Location" in the settings dialog below:

In the past, I have bookmarked and followed Robin's guide on how to move a ClickOnce deployment using this standard method. Robin's blog is a one-stop shop for any technical information regarding ClickOnce.

However, it turns out this method does not work if you want to change the CPU type of the application (which is what I wanted to do, from AnyCPU to x86) or if your certificate has already expired. There is nothing you can do about this because the processor architecture setting is part of the ClickOnce deployment manifest.

Basically I was stuck!!

And then I found this Stack Overflow post about uninstalling a ClickOnce application silently. The accepted answer links to a project on GitHub. It's a small project that was used by the Wunderlist app, and it's great: the code finds the uninstall entry in the registry and silently executes it in the background of the application.
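For the curious, ClickOnce registers an uninstall command in the registry for each installed app. As an illustration of the kind of work the library does (this is my own sketch, not the project's actual code, and the names are made up), the uninstall command has to be split into an executable and its arguments before it can be run:

```csharp
using System;

// Illustrative sketch only. A ClickOnce app registers an "UninstallString"
// under the per-user registry key
// HKCU\Software\Microsoft\Windows\CurrentVersion\Uninstall, typically of the form:
//   rundll32.exe dfshim.dll,ShArpMaintain AppName.application, Culture=neutral, ...
// Before executing it, the command needs splitting into an exe and its arguments.
static class UninstallCommand
{
    public static (string FileName, string Arguments) Split(string uninstallString)
    {
        if (string.IsNullOrWhiteSpace(uninstallString))
            throw new ArgumentException("Empty uninstall string", nameof(uninstallString));

        // The executable name contains no space here, so split at the first one.
        int firstSpace = uninstallString.IndexOf(' ');
        return firstSpace < 0
            ? (uninstallString, string.Empty)
            : (uninstallString.Substring(0, firstSpace),
               uninstallString.Substring(firstSpace + 1));
    }
}

class Demo
{
    static void Main()
    {
        var cmd = "rundll32.exe dfshim.dll,ShArpMaintain AppName.application, Culture=neutral";
        var (file, args) = UninstallCommand.Split(cmd);
        Console.WriteLine(file);  // the process to start
        Console.WriteLine(args);  // the arguments to pass to it
        // On Windows you would then run: Process.Start(file, args);
    }
}
```

The library does the heavy lifting of locating the right registry entry; the sketch just shows the shape of the command it ends up executing.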

I used this to uninstall my ClickOnce app and then kick off the installer for the new version of the app at a second location. It's so useful that if I ever need to deploy another ClickOnce application, I think I will include this code by default and wire it up to the database.

Problems

However, there were some users for whom the code failed. It turns out (for reasons I won't go into) that some of the installed applications had different names!! This tripped up the code that finds the uninstaller. So I forked the GitHub repository and added a function to find the uninstaller by the ClickOnce application URL. The code I added can be found here and my pull request is here.

Final Solution

Using this library, my final solution for this problem looked something like this:

private void MigrateToNewUrl(string newInstallUrl)
{
    try
    {
        // Step 1: Get Uninstaller Location (the old install URL)
        var oldLocation = "file://someserver/somedirectory/application/appname.application";
        var uninstallInfo = UninstallInfo.FindByInstallerUrl(oldLocation);

        if (uninstallInfo == null)
        {
            MessageBox.Show("Could not find application to uninstall");
            return;
        }

        // Step 2: Start Silent Uninstall Process
        var uninstaller = new Uninstaller();
        uninstaller.Uninstall(uninstallInfo);

        // Step 3: Start Install of new ClickOnce deployment
        // e.g. \\someserver2\somedirectory2\application\appname.application
        Process.Start(newInstallUrl);
        Application.Exit();
    }
    catch (Exception ex)
    {
        Logger.LogException(ex, MethodBase.GetCurrentMethod().Name, SystemInformation.UserName);
    }
}

Upgrading from the database

The above method can then be called from the MainForm like so:

private void MainForm_Load(object sender, EventArgs e)
{
    try
    {
        string sql = "select PropertyValue from AppSettings where PropertyName = 'MigrateApp'";
        if ( m_dbConn.ExecSqlCommandScalar<int>(sql) == 1 )
        {
            sql = "select PropertyValue from AppSettings where PropertyName = 'NewLocation'";
            var newLocation = m_dbConn.ExecSqlCommandScalar<string>(sql);
            MigrateToNewUrl(newLocation);
        } 
    }
    catch (Exception ex)
    {
        Logger.LogException(ex, MethodBase.GetCurrentMethod().Name, SystemInformation.UserName);
    }
}

On start-up of the application, the above code checks the database to see if the app should be migrated to a second location and, if so, performs the uninstall and setup process automatically.


Contact Me:  ocean.airdrop@gmail.com

Tuesday 5 December 2017

Load balancing TCP/IP traffic using HAProxy/Nginx

Despite all our best efforts, problems can and do happen in production. If you are practicing CI/CD and continually pushing new code to a server, bugs can sometimes creep in.

When that happens you want to be able to inspect, debug and even step-through the code to identify and fix the problem.

However, depending on the type of problem, it might be hard to reproduce on a dev box. Some problems only occur when real live data is flowing through the system. For example, if the problem relates to incoming TCP/IP connections from a specific set of IoT devices, it can be hard to reproduce in dev.

Of course, it's considered bad form to debug the code live on the production server. If you are connecting up to an application in debug mode, you're effectively stopping all clients communicating with the server while you step through the code.

Not good form! And let's not mention leaving a breakpoint on and then going for lunch!

In this scenario, what you really want to do is divert a small subset of traffic to a test/staging server for analysis, so you can step through the code in debug-mode.

In this circumstance, you need a reverse proxy / load balancer in front of your servers to be able to redirect traffic. There are lots of proxy servers out there, but HAProxy and Nginx are popular choices. For example, HAProxy is used by Stack Overflow and GitHub, which means it's been heavily tested in the field.

HAProxy

Using HAProxy, it's relatively easy to set up a rule that forwards a single IP address to a test/development server for you to debug the code. In the diagram below, the yellow device's traffic is singled out and routed to the test box by HAProxy.

So how do you configure this? Well, let's start with the basic HAProxy config below. As a minimum, the config needs a frontend section and a backend section. In the example below, the frontend section listens on port 23 for incoming TCP/IP connections. When it gets an incoming connection, it routes the traffic to one of the servers defined in the my_app_servers backend.

# HAProxy
frontend my_proxy_server
    bind 192.168.137.100:23
    mode tcp
    default_backend my_app_servers

backend my_app_servers
    mode tcp
    balance roundrobin
    server app1 192.168.137.101:23
    server app2 192.168.137.102:23

Nice and simple!

To route a specific IP address to a different server you need to use the access control list command (acl). It looks like this:

# HAProxy
frontend my_proxy_server
    bind 192.168.137.100:23
    mode tcp
    default_backend my_app_servers

    acl test_sites src 192.168.100.1 192.168.100.2
    use_backend my_test_servers if test_sites

backend my_app_servers
    mode tcp
    balance roundrobin
    server app1 192.168.137.101:23
    server app2 192.168.137.102:23

backend my_test_servers
    mode tcp
    server app1 192.168.137.103:23

The two key lines are:

acl test_sites src 192.168.100.1 192.168.100.2
use_backend my_test_servers if test_sites

acl test_sites src 192.168.100.1 192.168.100.2 sets up an acl rule named test_sites. It is activated when a client connects to HAProxy from the IP address 192.168.100.1 or 192.168.100.2.

If the acl rule is true, the second line use_backend my_test_servers if test_sites routes the connection to the my_test_servers block, which diverts all matching traffic to the test server 192.168.137.103.

Also nice and simple!

Useful HAProxy Commands

  • sudo apt-get install haproxy Install HAProxy
  • haproxy -v Print the HAProxy version to confirm it is installed
  • sudo haproxy -c -f /etc/haproxy/haproxy.cfg After you have made changes to the config file, this command checks the file to make sure it's valid
  • sudo service haproxy restart Restart HAProxy

HAProxy Gotcha!

As good as HAProxy is, there is a gotcha: it does not proxy UDP traffic. It's a TCP/HTTP load balancer. This is a shame, but there is light at the end of the tunnel. If you want to port forward/proxy UDP traffic, you might want to check out Nginx, which fully supports UDP as well as TCP.

So what about Nginx?

As mentioned, Nginx can also be used as a reverse proxy for TCP/UDP traffic. The config file is similar to HAProxy in that it is split into two parts: an upstream section and a server section, with the server section listening on a port and directing the traffic to an upstream block. Here is an example of what I first wrote while playing with Nginx, based on tutorials on the web.

# Warning. This doesn't work because you cant use the "if" statement in a stream context!
stream {
    upstream prod_backend {
        server 192.168.137.129:23;
    }
    upstream test_backend {
        server 192.168.137.131:23;
    }

    server {
        listen 23;

        # This line will fail: "if" is not allowed in a stream context
        if ( $remote_addr = 192.168.137.132 ) {
            proxy_pass test_backend;
        }

        proxy_pass prod_backend;
    }
}

That's nice and simple. There is only one thing: the above config file doesn't work!! It turns out that you cannot use the if statement in a stream context. It works just fine in an http context, but not in a stream context.

The alternative solution is to use the map statement instead like this:

# This version works!
stream {
   upstream prod_backend {
      server 192.168.137.129:23;
   }

   upstream test_backend {
      server 192.168.137.131:23;
   }

   map $remote_addr $backend_svr {
      192.168.137.140 "test_backend";
      default "prod_backend";
   }

   server {
      listen 23;
      proxy_pass $backend_svr;
   }
}

In the above config, the map block takes the Nginx built-in variable $remote_addr and compares it against the lookups in the block. When it finds a match, it sets the variable $backend_svr to the value on the right-hand side. So, when the client IP address is 192.168.137.140, the $backend_svr variable is set to test_backend, which is then used as the upstream backend.

This works, and now we are back to the same implementation as the HAProxy version.

Useful Nginx Commands

  • sudo apt-get install nginx Install Nginx
  • sudo nginx -v Get the version of Nginx
  • sudo nano /etc/nginx/nginx.conf Edit the Nginx config file
  • sudo /etc/init.d/nginx start Start Nginx
  • sudo /etc/init.d/nginx stop Stop Nginx
  • sudo /etc/init.d/nginx restart Restart Nginx
  • systemctl status nginx.service Check the status of the Nginx service

Nginx UDP Gotcha!

Even though Nginx supports UDP load balancing, there is also a gotcha!!! It doesn't perform session persistence out of the box. This means that protocols like OpenVPN, which use a persistent channel, will not work. During my testing I could see a new session for every packet that came over the wire. Session persistence is available in Nginx Plus, which is an expensive, paid-for version of Nginx. At some point this feature might trickle down to the free version, but it does not look like it has made its way down at the time of writing.
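To illustrate the point, here is a sketch of what the UDP equivalent of the earlier stream config looks like (the OpenVPN port 1194 and the addresses are just examples). The proxying itself works; it's the per-datagram session handling that causes the problem described above:

```
# Sketch only: UDP proxying works, but session persistence does not (free version)
stream {
    upstream vpn_backend {
        server 192.168.137.129:1194;
    }

    server {
        listen 1194 udp;
        proxy_pass vpn_backend;
    }
}
```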

Wrapping up...

That's it. It's fairly simple to set up TCP load balancing in both HAProxy and Nginx, but there are difficulties if you have a persistent UDP protocol.



Sunday 26 November 2017

Commodore 64 Basic

Well, that was a nice trip down memory lane!

I have just come across the C64 Mini over at https://thec64.com. It's not being released until 2018 but it's a mini replica of the Commodore 64 with USB and HDMI ports.

The Commodore 64 was my very first computer. Of course I played the games! Buggy Boy & Commando instantly spring to mind. But the cool thing about the C64 was that it booted straight into the BASIC language! I have fond memories of typing out programs from the back of magazines only for them not to work. The result of a rogue typo somewhere!

But I do remember experimenting with the BASIC language, because the user manuals for the computer covered it. Amazingly, I have just found the manuals preserved online here.

Isn't it funny that the GOTO keyword is considered harmful today in software development, but it introduced me to scrolling my name infinitely up a TV screen!

10 PRINT "Ocean Airdrop Woz Here"
20 GOTO 10
RUN

Anyway, what I didn't realise at the time was that Commodore BASIC was actually licensed from Microsoft. Commodore worked with Microsoft to produce a ROM-based version. This was in the early days of Microsoft, so their name wasn't mentioned anywhere! However, according to Wikipedia, Microsoft included an easter egg in version 2. If you typed:

WAIT 6502, 1

...it would print Microsoft on the screen. There is a great article about this over at the cached version of the defunct site pagetable.com here. It says: "Legend has it Bill Gates himself inserted this easter egg after he had had an argument with Commodore founder Jack Tramiel, just in case Commodore ever tried to claim that the code wasn’t from Microsoft".

Who knows how much of that is true, but who can argue with a sentence that begins with "Legend has it...."

Back to the C64 Mini! Will I get this replica when it comes out?

Nahh... It's probably just a Raspberry Pi in a fancy case. Also, the keys don’t work and are just for show. I guess you do get the joystick though. I think I'll wait for the Amiga version to show up!



Sunday 5 November 2017

Reading the contents of emails in an inbox.

The .NET framework comes with an SMTP email client in the box for sending out emails, and it is simple to use:

var msg = new MailMessage("oceanairdrop@gmail.com", toList, subject, messageBody);
new SmtpClient(server).Send(msg);

But recently, I wanted to be able to read the contents of emails in an inbox. That's when I went searching in the framework, only to find out that there is no support for IMAP in the .NET framework. That's when I found MailKit, and it's awesome!

MailKit can query an inbox in all sorts of ways and works really well. That brings me to this blog post, as it's a mental bookmark for me if I ever need to do this kind of thing in the future.

To use it, there is a NuGet package you can use to pull down the library and easily include it in your project:

Install-Package MailKit 

What's the code like? Well, here is an example of reading an inbox using the library:

using (var client = new ImapClient())
{
    client.ServerCertificateValidationCallback = (sndr, cert, chain, errors) => true;

    client.Connect("server", 993, true);
    client.Authenticate("username", "password");

    // The Inbox folder is always available on all IMAP servers...
    var inbox = client.Inbox;
    inbox.Open(FolderAccess.ReadOnly);

    Console.WriteLine("Total messages: {0}", inbox.Count);
    Console.WriteLine("Recent messages: {0}", inbox.Recent);

    // let's search for all messages received after 18/07/2017 with "Funny Joke" in the subject...
    var query = SearchQuery.DeliveredAfter(DateTime.Parse("2017-07-18"))
                           .And(SearchQuery.SubjectContains("Funny Joke"));

    foreach (var uid in inbox.Search(query))
    {
        var message = inbox.GetMessage(uid);
        Console.WriteLine("[match] {0}: {1}", uid, message.Subject);
    }

    foreach (var summary in inbox.Fetch(0, 10, MessageSummaryItems.Full | MessageSummaryItems.UniqueId))
    {
        Console.WriteLine("[summary] {0:D2}: {1}", summary.Index, summary.Envelope.Subject);
    }

    client.Disconnect(true);
}

Wow... That was easy! And here is an example of sending an email using the SMTP client in Mailkit:

using (var client = new MailKit.Net.Smtp.SmtpClient())
{
    client.Connect("server"); 
    client.Authenticate("username", "password");

    var messageToSend = new MimeMessage();
    messageToSend.From.Add(new MailboxAddress("Ocean Airdrop", "oceanairdrop@gmail.com"));
    messageToSend.To.Add(new MailboxAddress("Ocean Airdrop", "oceanairdrop@gmail.com"));
    messageToSend.Subject = "How you doin'?";

    messageToSend.Body = new TextPart("plain")
    {
        Text = @"some message goes here!"
    };

    client.Send(messageToSend);
    client.Disconnect(true);
}


Saturday 28 October 2017

C++ 2017

Wahooo!!! C++ 17 has now been published as an ISO standard!!

I haven't used C++ for a long time, as C# is my daily driver these days. But I will always remember, when I moved from C++ to C#, being so much more productive: the language was a safer environment and there were fewer things you had to worry about.

However, I always try to keep up to date with news from the C++ camp. The C++ language will always have a special place in my heart, and I have plenty of war stories and battle scars! Yes, C++ is an overly complicated language with years of baggage, but with every new feature that is added there are fewer and fewer sharp edges to impale yourself on.

The only thing I would like the standards committee to do is actually remove old/legacy features from the language, and maybe provide a compiler switch to re-enable them for backwards compatibility. This would mean you have to opt in to use those legacy features. Unfortunately, I don't think this will ever happen.

Of course, there was a time when C++ languished, but that all changed in 2011 with C++11. It added so many features to the language, like lambdas, auto, unique_ptr and shared_ptr, that it changed the way you code. For example, these days they say that if you're writing "new" or "delete" you're doing it wrong! Instead you should be using make_shared() or make_unique(), which means you don't have to worry about memory leaks (as much).

C++ 20

What piqued my interest is what is coming down the pipe for C++20 and beyond. Big things are brewing in the C++ world and C++20 is where all the big action is. It's 3 years away but it promises:

  • Modules
  • Concepts
  • Ranges
  • Coroutines
  • Metaclasses

Okay, so I kinda snuck metaclasses in that list. It might be a little too early for them to make the cut for C++20. But a guy can dream, can't he?

The biggest problem I have with C++ at the moment is the #include header system. It's so old and antiquated. Coming from Java or C#, where they have a module system, the #include system is painful. But hopefully C++20 will fix that with its new module system (and then maybe we can get an official package manager).

C++ Core Guidelines

However, the thing about C++ is that it's as "old as god's dog", which means that when searching on the internet, you need to make sure you are reading about the latest stuff. While there may be fewer sharp pointy bits, for the most part the old stuff is still there and you need to know what to avoid!

You don’t want to be reading old, out-of-date information. Thankfully, the C++ guys (Bjarne & Herb) are working on the C++ Core Guidelines. Apparently, these guidelines are a "set of rules designed to help you write modern, safe C++ – saving you time and effort as well as making your code more reliable." Microsoft even has a NuGet package add-in for Visual Studio that performs code analysis to check your code for compliance!

Wrapping Up

In a world where we have modern, new and shiny languages like Rust, Swift & Kotlin, you'd be forgiven for thinking there is no place for C++; that it's time to retire the old dog and put her out to pasture (tongue == in-cheek). Of course, we know that's not the case when we are talking about a language as important as C++. It's just good to see that C++ is alive and well, and I am watching with interest to see how the language continues to evolve.



Friday 27 October 2017

INotifyPropertyChanged & Fody

We all know what the INotifyPropertyChanged interface does. It can be used to raise an event when a property of a class changes. Then, in another section of code, we can subscribe to these events and perform certain actions based on the application's needs. It's all very cool, but also old hat and pedestrian.

But the thing with this interface is that you can end up writing a lot of boilerplate code! For example, take this simple Person class:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

It's nice. It's neat. All is good with the world. But then you decide you want to be notified when one of the properties in the class changes, so we introduce the INotifyPropertyChanged interface. Suddenly, for every property, we find ourselves expanding the above code like this:

private string m_firstName;
public string FirstName
{
    get
    {
        return m_firstName;
    }
    set
    {
        OnPropertyChanged("FirstName", m_firstName, value);
        m_firstName = value;
    }
}

For every property, you need to include a backing field, fill in the getter and setter functions yourself, and make sure the setter raises the property-changed event. Geez! If you need to add this to a number of objects within a sizeable project, the work quickly becomes monotonous.

The full class definition now looks like this:

class PersonNotify : INotifyPropertyChanged
{
    private string m_firstName;
    public string FirstName
    {
        get { return m_firstName; }
        set
        {
            OnPropertyChanged("FirstName", m_firstName, value);
            m_firstName = value;
        }
    }

    private string m_lastName;
    public string LastName
    {
        get { return m_lastName; }
        set
        {
            OnPropertyChanged("LastName", m_lastName, value);
            m_lastName = value;
        }
    }

    private DateTime m_dateOfBirth;
    public DateTime DateOfBirth
    {
        get { return m_dateOfBirth; }
        set
        {
            OnPropertyChanged("DateOfBirth", m_dateOfBirth, value);
            m_dateOfBirth = value;
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    public void OnPropertyChanged(string propertyName, object before, object after)
    {
        if (PropertyChanged != null)
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

This is the situation I was in when I thought to myself “surely there must be a better way?”. And, in the internet age, if you can think of it, chances are someone else has already implemented it!

Enter Fody! You can find the NuGet package here and install it with NuGet like this:

Install-Package PropertyChanged.Fody 

It’s a great little utility which, at compile time, looks for classes that implement the INotifyPropertyChanged interface, implements the backing field for each property of your class and raises the event for you.

With Fody installed, we can now revert back to our original class with just a couple of small modifications:

public class Person : INotifyPropertyChanged
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }

    public event PropertyChangedEventHandler PropertyChanged;

    public void OnPropertyChanged(string propertyName, object before, object after)
    {
        if (PropertyChanged != null)
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

In the code above we have implemented the INotifyPropertyChanged interface and added a small bit of code to raise an event when any of the properties change.

Side note: In my final code I also extract the event and OnPropertyChanged handler into a central base class which cleans up all the model classes that derive from it.

Now, when you build it you will see this output in the Visual Studio build window:

1>------ Build started: Project: FodyTest, Configuration: Debug Any CPU ------
1>    Fody: Fody (version 2.0.0.0) Executing
1>      Fody/PropertyChanged:    No reference to 'PropertyChanged' found. References not modified.
1>    Fody:   Finished Fody 52ms.
1>    Fody:   Skipped Verifying assembly since it is disabled in configuration
1>    Fody:   Finished verification in 0ms.
1>  FodyTest -> C:\OceanAirdrop\TempCode\FodyTest\FodyTest\bin\Debug\FodyTest.exe
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

Now that’s nice and saves us a lot of work. Here's the full implementation:

class Program
{
    static void Main(string[] args)
    {
        var p = new Person();
        p.PropertyChanged += PropertyChangedEvent;
        p.FirstName = "Berty";
        p.LastName = "Burnstein";
        p.DateOfBirth = DateTime.Now;
    }

    private static void PropertyChangedEvent(object sender, PropertyChangedEventArgs e)
    {
        var propertyChanged = (OceanAirdropPropertyChangedArgs)e;
        Trace.WriteLine(string.Format("{0} changed from {1} to {2}.", 
        propertyChanged.PropertyName, propertyChanged.Before, propertyChanged.After));
    }
}

// Create your model objects as normal, but derive from BaseData and INotifyPropertyChanged
public class Person : BaseData, INotifyPropertyChanged
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

// Create a base class that implements the INotifyPropertyChanged interface and raises the event
public class BaseData : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public void OnPropertyChanged(string propertyName, object before, object after)
    {
        if (PropertyChanged != null)
            PropertyChanged.Invoke(this, new OceanAirdropPropertyChangedArgs(propertyName, before, after));
    }
}

// Create own event args that capture property value before and after!
public class OceanAirdropPropertyChangedArgs : PropertyChangedEventArgs
{
    public OceanAirdropPropertyChangedArgs(string propertyName) : base(propertyName) { }

    public OceanAirdropPropertyChangedArgs(string propertyName, object before, object after) : base(propertyName)
    {
        Before = before; After = after;
    }
    public virtual object Before { get; }
    public virtual object After { get; }
}

So there we have it. I have written this blog post as a reminder for myself, for the next time I need to implement change notifications in another project. Fody is a nice little utility to be aware of and keep in our software toolbox.



Thursday 31 August 2017

Professional Winform Controls & Libraries

Yes, yes, I know what you’re thinking. Winforms development! But… Erm… Isn’t Winforms dead?

I wouldn’t say Winforms is dead - let’s just say it’s “been done”! It’s complete. Whilst the Winforms framework is not as hot or current as it once was, it’s still very capable for a lot of business tasks and needs. And it’s still my go-to choice for developing in-house “line of business apps” that have no need to be online.

The fact of the matter is, there are some apps which don’t need to be put on a mobile phone or be accessible from the web. The desktop isn’t going away anytime soon, and traditional desktop applications still have their place. These line-of-business apps can be considered "dark matter" apps which keep the cogs of business turning. The majority of people will never see them, but they are out there being used daily.

Good software solves business problems, no matter what form it comes in. As developers, we should always be looking for the minimum viable product. What’s the least amount of code we need to write to get the job done and solve the business need? The majority of the time, a simple traditional desktop application will do the trick.

As an aside, I also make use of ClickOnce, which gives you all the deployment benefits of a web app inside a company. You publish an update and the next time your users load the app, they are on the latest code.

But just because Winforms is old technology doesn’t mean the applications can’t be fancy-pants-looking! Because this area is so well-trodden there is a rich ecosystem of free controls out there which you can use to spice up your app. These apps don’t have to be boring battleship grey! You’re not stuck with the controls you get by default in Visual Studio.

With that said, here’s a rundown of some controls I have used or come across which add shine to any app:

Krypton .NET WinForms controls

"The Krypton Suite of .NET WinForms controls are now freely available for use in personal or commercial projects." That’s a quote straight from the GitHub pages of ComponentFactory.

This is a great library and includes a ribbon control, docking controls and an enhanced tab control (which they call a navigator). This is definitely one to check out!

Check it out here: https://github.com/ComponentFactory/Krypton

Syncfusion .NET WinForms controls

Syncfusion now has a community license and is available to all!

“The community license is our way of giving back to the community,” says Daniel Jebaraj, vice president of Syncfusion. “We want to support the individual and small business developer by offering our tremendous capabilities at no cost, and with no expiration date. What we offer is not a subset of Essential Studio but the real deal.

Customers using the community license will receive the exact same bits that we ship to our other customers. What is more exciting is that we offer free technical support to every customer licensed under the community license!”

Syncfusion has a whole slew of controls, but of interest to me are their Data Grid and Spreadsheet controls with their Excel-like UI.

Check it out here: https://www.syncfusion.com/products/communitylicense

HTML Renderer

This is a fantastic little HTML framework. It’s a lightweight HTML rendering library, which means you can embed HTML UI elements in any control you want and expand them beyond their original implementation. For example, I have previously used this to embed HTML in a DataGridView cell.

Check it out here: https://github.com/ArthurHub/HTML-Renderer


Advanced DataGridView

I use this control all the time. It provides Excel-like filtering over multiple columns.

Check it out here: https://github.com/davidegironi/advanceddatagridview

ObjectListView

This is a flexible replacement for the built-in ListView control. It’s feature-rich and has animations, filtering, drag-drop, and even a tree-list version.

Check it out here: http://objectlistview.sourceforge.net/cs/index.html

Microsoft Chart Controls

Need to display a chart? Microsoft's own chart controls have a large selection to choose from, with great documentation.

Check it out here: https://www.microsoft.com/en-gb/download/details.aspx?id=14422

CefSharp - That’s Chrome all up in your Winforms!

CefSharp enables you to bundle the open source Chromium web browser in your application, with the added ability to execute code in JavaScript land from C# and vice-versa. This tool is powerful!

Check it out here: https://github.com/cefsharp/CefSharp

Excel EPPlus

If you’re writing a line-of-business app, sooner or later you’re going to come into contact with Excel. This library is your friend. It’s quite simply awesome. It allows you to generate advanced Excel spreadsheets from a C# application.

Check it out here: http://epplus.codeplex.com/

Conclusion

There you have it. While Winforms might no longer be classed as one of the cool kids, you can still be very productive with it, and these frameworks imbue you with the power to create good-looking applications.



Sunday 5 March 2017

Programmatic Analysis of Wireshark Log Files using C#

The other day, I wanted to perform some Wireshark filtering on a .pcap file to obtain a count of the packets found for a large number of IP addresses.

I wanted to find out the number of TCP retransmissions for each IP address, as well as the count of TCP resets for each IP address. And finally, I wanted to get a count of the number of "keep alive" packets for each IP address.

Okay, so this is pretty easy to perform in Wireshark. Just filter the traffic with the following filters:

tcp.analysis.retransmission && ip.addr == 1.2.3.4
tcp.flags.reset == 1 && ip.addr == 1.2.3.4
tcp.analysis.keep_alive && ip.addr == 1.2.3.4  

But I didn't want to go through the user interface for hundreds of different IP addresses. I wanted to do this programmatically, in code.
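As an aside, one alternative worth noting (a sketch, not what I ended up using) is Wireshark's command-line companion tshark, which accepts the same display filters via its -Y option. A small shell loop can generate a count command for each IP address; the loop below only prints the commands and assumes a capture.pcap alongside:

```shell
# For each IP of interest, print a tshark command that counts matching packets.
# (Remove the echo to actually run them; assumes tshark is installed and
# capture.pcap exists.)
for ip in 1.2.3.4 5.6.7.8; do
    filter="tcp.analysis.retransmission && ip.addr == $ip"
    echo "tshark -r capture.pcap -Y \"$filter\" | wc -l"
done
```

The same pattern works for the reset and keep-alive filters shown above.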

Now, there are a couple of different approaches you can take here depending on your requirements.

At first I used PcapDotNet. This is a great library that lets you walk the packets in the file and explore the individual properties of each packet.

Simply download the binaries from here. Then reference them in your project and you're off!

The code to get up and running is simple. The code below uses the IncomingPacketHandler function to walk every packet in the .pcap file:

using System;
using PcapDotNet.Core;
using PcapDotNet.Packets;
using PcapDotNet.Packets.IpV4;
using PcapDotNet.Packets.Transport;

class Program
{
    static int m_packetNumber = 0;

    static void Main(string[] args)
    {
        string file = @"C:\WireSharkAnalysis\capture2.pcap";

        // Create the offline device
        OfflinePacketDevice selectedDevice = new OfflinePacketDevice(file);

        // 65536 guarantees that the whole packet will be captured on all the link layers
        int readWholePacket = 65536;

        // read timeout (in milliseconds)
        int readTimeOut = 1000;

        using (PacketCommunicator communicator = selectedDevice.Open(readWholePacket, PacketDeviceOpenAttributes.Promiscuous, readTimeOut))
        {
            communicator.ReceivePackets(0, IncomingPacketHandler);
        }
    }

    private static void IncomingPacketHandler(Packet packet)
    {
        // This function will get called for every packet in the .pcap file!
        m_packetNumber++;

        Console.WriteLine(packet.Timestamp.ToString("yyyy-MM-dd HH:mm:ss.fff") + " length:" + packet.Length);

        var testIP = new IpV4Address("10.1.1.1");

        if (packet.Ethernet.IpV4.Tcp.IsReset)
        {
            // do something
        }

        if (packet.Ethernet.IpV4.Tcp.ControlBits.HasFlag(TcpControlBits.Acknowledgment) &&
            packet.Ethernet.IpV4.Tcp.ControlBits.HasFlag(TcpControlBits.Push))
        {
            // do something
        }

        if (!packet.Ethernet.IpV4.Tcp.IsReset)
        {
            // do something
        }

        if (packet.Ethernet.IpV4.Source == testIP)
        {
            // do something
        }

        Console.WriteLine();
    }
}

You can perform deep inspection of the packet as seen below in the "quick watch" window. In short, you get access to everything.

Very neat!

But here's the thing - I wanted to get a count of the number of TCP retransmissions. That information is not available as part of each individual packet. Apparently Wireshark "compares the sequence numbers to what it has determined to be the next expected sequence number" to allow you to filter them.

This is easy in Wireshark. Just type "tcp.analysis.retransmission" into the filter bar and it will display the TCP retransmissions. The filter is part of the TCP analysis that Wireshark performs when reading the packets.

So, that's when I turned to my second option: using tshark.exe (the command-line version of Wireshark) to read in the file and apply my filter. I wrapped the tshark command-line tool in a simple class, but the main workhorse is this function:

public int ProcessFilter(string filter)
{
    // Stage 1: Set up the wireshark filter command
    m_tsharkCmd = string.Format(m_tsharkTemplate, m_tsharkPath, m_pcapFile, filter);

    // Stage 2: Clear the output
    m_tsharkOutput.Clear();

    // Stage 3: Run the command!
    using (m_process = new Process())
    {
        m_process.StartInfo.WorkingDirectory = @"C:\";
        m_process.StartInfo.FileName = Path.Combine(Environment.SystemDirectory, "cmd.exe");
        m_process.StartInfo.UseShellExecute = false;
        m_process.StartInfo.RedirectStandardInput = true;
        m_process.StartInfo.RedirectStandardOutput = true;
        m_process.StartInfo.RedirectStandardError = true;
        m_process.OutputDataReceived += OutputHandler;
        m_process.ErrorDataReceived += OutputHandler;
        m_process.Exited += new EventHandler(process_Exited);
        m_process.Start();
        m_process.BeginOutputReadLine();
        m_process.BeginErrorReadLine();

        // Send the tshark command followed by an exit command to the shell
        m_process.StandardInput.WriteLine(m_tsharkCmd);
        m_process.StandardInput.WriteLine("exit");
        m_process.WaitForExit();
        m_process.Close();               
    }

    // Stage 4: Output the number of packets!
    int packetCount = GetPacketCount();
    return packetCount;
}

It's a quick and dirty approach but hey, it works!

Using this approach means I can loop over a number of different IP addresses, issuing the Wireshark filters from above to find the number of TCP retransmissions:

tcp.analysis.retransmission && ip.addr == 1.2.3.4
tcp.flags.reset == 1 && ip.addr == 1.2.3.4
tcp.analysis.keep_alive && ip.addr == 1.2.3.4  
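As a rough sketch of that loop (assuming the wrapper class is called TsharkWrapper - a made-up name - and that it exposes the ProcessFilter method shown above; the IP addresses are made up too):

```csharp
using System;

class Analyser
{
    static void Main()
    {
        // Hypothetical list of IP addresses to report on
        string[] ipAddresses = { "1.2.3.4", "5.6.7.8", "9.10.11.12" };

        // TsharkWrapper is an assumed name for the class wrapping tshark.exe
        var tshark = new TsharkWrapper(@"C:\WireSharkAnalysis\capture2.pcap");

        foreach (string ip in ipAddresses)
        {
            int retrans    = tshark.ProcessFilter("tcp.analysis.retransmission && ip.addr == " + ip);
            int resets     = tshark.ProcessFilter("tcp.flags.reset == 1 && ip.addr == " + ip);
            int keepAlives = tshark.ProcessFilter("tcp.analysis.keep_alive && ip.addr == " + ip);

            Console.WriteLine(ip + ": retrans=" + retrans + " resets=" + resets + " keepalives=" + keepAlives);
        }
    }
}
```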

This small project can be found on my GitHub pages here. It's a simple stub project but can easily be expanded to perform different types of analysis on any .pcap file.


Contact Me:  ocean.airdrop@gmail.com

Saturday 18 February 2017

Using Redis as a Data Caching Server

Everyone knows how important caching is in computing. We are surrounded by caching managers here, there and everywhere. You've got your CPU caching in the form of L1 and L2 cache which stores the next bit of data the CPU needs. GPUs have a cache. All hard drives come equipped with an on-board cache. Database servers heavily cache your most used queries and query plans, web servers cache the most used data and web clients (browsers) cache client side data.

Basically, caching is everywhere.

Why am I mentioning this? Well, recently I have been redesigning a framework that plans to make use of cached data to save trips to the database and Redis looks very enticing.

But what's the cost/impact of not using a cache?

Well, I found this great link, "Latency Numbers Every Programmer Should Know", which ranks latency from accessing the CPU cache all the way up to connecting to a computer over the open internet.

The numbers shouldn't surprise you. Basically accessing memory is wicked fast! (It can be referenced in 100ns). If you have the data in memory versus retrieving it from the database on another server it's a no-brainer! - Memory wins every time.

That's where Redis comes in. - It's a fast in-memory data cache/NoSQL database.

It's an open source server software and they have a client for every programming language imaginable. It's also used by websites like Twitter, GitHub, Stack Overflow, etc.

Installation (on Windows)

The download pages of Redis explain that the "Microsoft Open Tech group" supports the Windows port of Redis. It's available to download via NuGet here. Just: "Install-Package Redis-64"

Incidentally, it looks like Azure allows you to set up a Redis cache on their service. I wonder if they are using the same version of Redis they are hosting on NuGet?!

When you run the server from the command line you will see the Redis logo and that the server is accepting connections on port 6379 (the default port).

This runs the Redis server in interactive mode.

To install Redis as a Windows service, run: redis-server --service-install redis.windows.conf --loglevel verbose. To uninstall the service, run: redis-server --service-uninstall

You can then go and start Redis as a Windows service.

That's it! The server is installed!

ServiceStack.Redis Client

Once you have got the server installed, the next thing you will need to do is download a client. The best client for C# is ServiceStack.Redis.

So what's the code look like?

Well, it's nice and simple. Just what we like. For example, the following code adds some data to the cache, then retrieves it later on.

using ServiceStack.Redis;

class Program
{
    static void Main(string[] args)
    {
        string someOrder = "some data to cache";

        // Store some data
        using (RedisClient client = new RedisClient("127.0.0.1", 6379))
        {
            client.SetValue("order:1", someOrder);
        }

        // Later on! - Let's get some data from the cache.
        using (IRedisClient client = new RedisClient("127.0.0.1", 6379))
        {
            var result = client.GetValue("order:1");
        }
    }
}

Expiring Data

As we all know, one important aspect of a cache is knowing when to let go of the cached data. Holding on to data for too long can give you headaches. This is application-specific, and you will need to understand the time limits after which certain types of data go stale.

Here's a more complicated example that serialises a class as well as expiring the data after 10 seconds.

using System;
using System.Threading;
using ServiceStack.Redis;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }

    public Person(string n, int a)
    {
        Name = n; Age = a;
    }
}

class Program
{
    static void Main(string[] args)
    {
        Person person1 = new Person("Rick", 45);
        Person person2 = new Person("Morty", 18);

        string jsonString = "";

        int hours = 0, mins = 0, secs = 10;
        TimeSpan expireIn = new TimeSpan(hours, mins, secs);

        // Store some data
        using (RedisClient client = new RedisClient("127.0.0.1", 6379))
        {
            jsonString = Newtonsoft.Json.JsonConvert.SerializeObject(person1);
            client.SetValue("person:1", jsonString, expireIn);

            jsonString = Newtonsoft.Json.JsonConvert.SerializeObject(person2);
            client.SetValue("person:2", jsonString, expireIn);
        }

        // Later on! - Let's get some data from the cache.
        using (IRedisClient client = new RedisClient("127.0.0.1", 6379))
        {
            var result = client.GetValue("person:2");

            if (result != null)
            {
                var pTemp = Newtonsoft.Json.JsonConvert.DeserializeObject<Person>(result);
            }
        }

        // Much later on (after 10 secs)! - Let's try the cache again.
        Thread.Sleep(1000 * 12);

        using (IRedisClient client = new RedisClient("127.0.0.1", 6379))
        {
            var result = client.GetValue("person:2");

            // The result will be null because more than 10 secs have elapsed
            if (result != null)
            {
                var pTemp = Newtonsoft.Json.JsonConvert.DeserializeObject<Person>(result);
            }
        }
    }
}

Wrapping Up!

There's a lot more to Redis than I have mentioned here. It has a full suite of commands and can be used in many other ways apart from a cache - but it makes a great cache manager!
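For example (a minimal sketch with ServiceStack.Redis, assuming a server on the default local port - the key name is made up), Redis's atomic increment makes a simple hit counter trivial:

```csharp
using System;
using ServiceStack.Redis;

class CounterDemo
{
    static void Main()
    {
        using (var client = new RedisClient("127.0.0.1", 6379))
        {
            // IncrementValue is atomic on the server, so multiple
            // processes can bump the same counter safely
            long hits = client.IncrementValue("page:home:hits");

            Console.WriteLine("Home page has been hit " + hits + " times");
        }
    }
}
```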


Contact Me:  ocean.airdrop@gmail.com

Sunday 12 February 2017

Code and Music

Do you listen to music when you code?

I do.

For me, there is no better experience than falling into a code-hole while being whisked away by a backing track of your choosing.

I find the two complement each other.

If you have a sufficiently meaty problem and an inkling of how you're going to solve it, music can sweep away time.

You tumble down that code-hole and before you know it, you look up and a couple of hours have passed.

Something happens in our brains, where your logic and reasoning about a problem develops to the point where you get a nice flow going. It's hard to explain but it happens.

I am in no way precious about what type of music I listen to. In fact I listen to a lot of different genres, but I find nothing complements coding better than a DJ set.

I love music (who doesn't?). I collect music. I grew up being fed and watered on all types of dance music genres and, back in the day, listening to the BBC Essential Mix was a staple! - In today's landscape, music has never been more accessible, with podcasts, YouTube, SoundCloud, Mixcloud etc. There is so much fuel out there to code to.

My takeaway? Let the music play, and don't forget to turn it up to 11.


Contact Me:  ocean.airdrop@gmail.com

Sunday 5 February 2017

Serilog & Application Diagnostic Logging

Show me an application that doesn't log anything and I'll show you an application that is hard to debug.

Every application we write should have a healthy sprinkling of diagnostic logging embedded throughout its logic. A trace to show us what's happening on the inside from the outside. In the past, I've hand-rolled my own bespoke logging classes to output either to file, database or simply console.

Not any more!

The problem has been solved. It's been done. There are libraries out there like log4net or NLog.

But from now on, I intend to standardise all my C# code to use Serilog.

Serilog is an example of a library where so many developers have poured in so much time that any home-grown library cannot compete.

Why waste your time? Just "nuget" it into your project and be done. Move on.

Why is it so cool?

Because it's simple. And because it supports many, many outputs. They call them sinks and there is a sink for everything as you can see in the image above.

But the really cool part is that you can pass a data model to Serilog and it will serialise its properties in JSON format.

Want to see some sample code? The GitHub pages have plenty of samples, but the code below demonstrates how Serilog logs to:

  • The screen
  • A rolling log file on disk
  • And a database table

using System;
using Serilog;

namespace OceanAirdrop
{
    class SomeModel
    {
        public int UserId { get; set; }
        public string Name { get; set; }
        public DateTime BirthDate { get; set; }
    }

    public class SeriLogger
    {
        public static void Main(string[] args)
        {
            string connName = "db-connection-string";

            Log.Logger = new LoggerConfiguration()
                .MinimumLevel.Debug()
                .WriteTo.LiterateConsole()
                .WriteTo.RollingFile("AppLogs\\SomeCoolApp-{Date}.txt")
                .WriteTo.MSSqlServer(connName, "some_cool_app_log")
                .CreateLogger();

            try
            {
                Log.Information("Some important log information!");

                var someData = new SomeModel { UserId = 256, Name = "Ocean Airdrop", BirthDate = DateTime.Now };

                Log.Information("Some more logging: {@SomeModel}", someData);
            }
            catch (Exception ex)
            {
                Log.Error(ex, "Something went kaboomski!");
            }
        }
    }
}

Contact Me:  ocean.airdrop@gmail.com

Sunday 29 January 2017

Data Modelling and Diagramming Database Schemas

So, I have recently been designing a schema for a new database system. As everyone knows, this is a technical process and requires a good understanding of the problem domain to make sure you collect all the business concepts and relationships.

For these kind of things, you can't beat a good entity relationship diagram to visually see the makeup of your database tables and the relationships between them.

This post is about the "Toad Data Modeler" application, which is what I've been using to model my database design. It's such a great tool that I think it should be present in the toolbox of any DBA.

Now, I know Microsoft includes a rudimentary diagramming tool in SQL Server. Just right-click on "Database Diagrams" and add the tables you want to include.

But this feature hasn't been shown any love since SQL Server 2000, and the diagrams are very simple. Microsoft Visio is another tool I've previously used, and it allows you to create an ERD of your database, although I think you may need the Professional version to reverse engineer an existing database. It's not intelligent in any way beyond creating a pretty diagram.

But my new favourite tool to use is Toad Data Modeler.

Below is an example screenshot of Toad Data Modeler on the sample "AdventureWorks" database.

It allows you to reverse engineer (read in) an existing database, although the freeware version (the one I'm using) only allows you to reverse engineer a maximum of 25 tables at a time.

Once the model has been loaded you can easily add foreign keys between tables, add unique constraints, and basically ensure there are no floating tables in your design. You can hover over constraints to see which database fields they are attached to. If you have made changes to the actual database schema under the covers, you can "Update the model from the database". And if you have made any changes in the app, you can export the SQL.

In short, it's awesome! It's a great tool to use to scan over your tables to make sure all the required relationships of each individual table are met. Oh, and did I mention it works with all the major database systems? And it's free!

If you haven't got it, download it from here.


Contact Me:  ocean.airdrop@gmail.com

Sunday 8 January 2017

Another Year. Another Year's Checklist!

Ahhh yes… Another year! Another fresh start and set of goals for the year. So many good intentions.

On this blog I try to detail my meandering adventures in technology, so I thought I'd write up the tech-subjects I've had in the back of my mind but never got around to looking into.

You never know, maybe this is the year I can tick some of them off!!

Truth be told, I have had this laundry list of things-to-do-and-learn swirling around my head from last year. And maybe the year before as well.


  • C++ 17 will be released this year. I've been keeping up with all the new toys that have been added to the language since C++11. Even though I don't code in C++ anymore (C# is my daily driver) I still have a fondness for C++. It would be good to have a refresher on some of the newer language features.

    Although, it seems like C++20 is going to be the BIG release with modules (finally) and async/await billed as features!

  • Install and play with .NET Core on Linux and port a simple application. .NET Core 1.0 was released last year and I have not yet played with it. Although in my defense they do say to wait until version 2 of a product. You know, for the bigger bugs to be ironed out.

  • Install Linux (again) on one of my machines!! If I am going to test and play with .NET Core I will need to have Linux anyway. Is this the year I finally say goodbye to Windows? I say this every year. And the answer is always no! I love Visual Studio too much!

  • Take a look at the Nim Programming Language. There was a time when I wanted to check out Lua. That urge has now been replaced by Nim.

  • I have had an idea for a side-project for ages. This project would be a "Windows Explorer IMDB Overlay" over the filesystem and would link to IMDB. This TMDbLib API looks like it will do the job. It's an API wrapper for themoviedb.

  • I would like to play around with SkiaSharp and UrhoSharp. They are both graphics libraries. Skia is an open source C++ 2D graphics library used in Chrome, Firefox, Android etc.; SkiaSharp is a C# port. Urho3D is a cross-platform 2D and 3D game engine; UrhoSharp is its C# port.
  • Pay some attention to my Amazon EC2 instance. I have a server idling in the sky and it would be good to do something useful with it.

  • Think of an IoT project to do on my Raspberry Pi 2 device. I did have an idea to try and link up my GoPro to it. I did find a website that detailed the GoPro Wi-Fi commands and their general query structure.

  • Play more xbox! (Why isn't this top of the list?)

  • Read up on algorithms.

  • Read the book CODE by Charles Petzold. It's an old book but it's supposed to be a nice walk through of the basic concepts of computers. I've been meaning to read it for ages now.

  • That reminds me! Read more books. I already have an Audible account but I need to set aside more time to physically read. You know. The old-fashioned way. :)

  • Re-install and play with GIMP and Blender (...Again, hopefully this time I will grok them)

  • I've been meaning to look into Vue.js and Ractive.js. They are both lightweight HTML templating engines, and both use moustache-style templating. They both look good, and if I ever do a client-side web project I hope to try one of them.

  • Play around with PostgreSQL. I use SQL Server daily and am very adept with it, but I have never used PostgreSQL, which is just as capable. There is a Windows version of the database here. I can't believe the download is only 137MB. In comparison, the download for SQL Server Management Studio (just the interface) is knocking on a gigabyte!

  • On a side note (while we're talking of databases), try to think more "set-based" than "procedural" when writing SQL queries. Why do I always think in cursors instead of set-based operations? Damn my imperative mind!

  • Finish off my Xamarin side-project and post it on my GitHub. Or at the very least post what I have done so far.

  • Python. If I need to knock up a quick throw-away script (say to do something with the filesystem) I want to try and remember to use Python (just to keep my toe in).

  • Take another look at TypeScript now it's reached 2.1! Version 2.1 compiles async/await code down to ECMAScript 3 without needing Babel! Yay!

  • Take a look at the Kotlin Programming Language! Apparently you can write Android apps in it now!

  • Unity can run on all platforms, but can you use it for a line-of-business app? Probably not, but the scripting language it uses is C# and it would be interesting to dig into.

  • .NET Standard 2.0 is due to come out this year (apparently when Visual Studio 2017 drops). I deffo want to play with this. Is it the future of .NET?

So there it is. In short, I seem to have a lot of spinning plates of "things I hope to get around to do/read up on/investigate". There’s a lot out there to learn. I guess that's one of the reasons I have a not-to-do list!


Contact Me:  ocean.airdrop@gmail.com
