Sunday 14 July 2019

Fixing IntelliSense when using RazorMachine

The best thing about using Razor to generate server-side HTML is that you get full IntelliSense when writing your HTML files. The trouble is, when you use RazorMachine in a console application, out of the box, I was getting IntelliSense errors.

There are a couple of things you need to do to fix these issues: alter the app.config file and make sure a couple of ASP.NET DLLs are located in the bin directory.

The code for this blog post can be found on GitHub here: OceanAirdrop/TestRazorMachineHTMLTemplate

Unload & then Reload project to see changes

The first thing to know when making changes to the app.config file is that, to see the changes you have made, you need to unload and reload the project.

Once you have unloaded your project, you can reload it by selecting the "Reload Project" option:

If you don't reload the project you won't see the changes. It's either that or restart Visual Studio!!

Problem 1 - Fixing 'implicitly typed local variable' error CS8023

If you use the var keyword in your cshtml:

then you might see errors like this:

To fix that, and other errors, we need to add the following section to the app.config file.

The actual snippet that should be copied and pasted into the app.config file is as follows:

<system.web>
  <compilation debug="true" targetFramework="4.5">
    <assemblies>
      <add assembly="System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <add assembly="System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <add assembly="Microsoft.CSharp, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      <add assembly="System.Web.Helpers, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Web.WebPages, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Web.Mvc, Version=5.2.3.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    </assemblies>
  </compilation>
  <pages>
    <namespaces>
      <add namespace="System.Web.Helpers" />
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Ajax" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="System.Web.Routing" />
      <add namespace="System.Web.WebPages" />
    </namespaces>
  </pages>
</system.web>

Problem 2 - Fixing missing Helpers, WebPages errors CS0234

In a console application, by default, we do not have access to the ASP.NET DLLs that Razor expects, so we will see the following errors in the "Error List" window:

To fix these issues you need to make sure you have the 'Microsoft.AspNet.Mvc' NuGet package installed in your project. This will bring along a couple of dependencies with it.

That will give you the necessary DLLs that Razor depends on.

Now, it should be noted that you only need these NuGet packages so that you can copy the following DLLs to the root Bin directory:

  • System.Web.Helpers.dll
  • System.Web.Mvc.dll
  • System.Web.WebPages.dll

After you have these DLLs you could remove the Microsoft.AspNet.Mvc and Microsoft.AspNet.WebPages NuGet packages.

Problem 3 - Copying DLLs to root of Bin directory

Now, even with the necessary DLLs, you will still see IntelliSense errors. It turns out that because Razor is an ASP.NET web component, it expects the DLLs to be in the root of the Bin directory and not in the Bin\Debug or Bin\Release directory.

So if your directory looks like this:

...you need to copy the following DLLs to the root of the bin directory:

For my test project, I set up a batch file that would run as a post-build event

and made sure the batch file runs on every build.

Another way of doing this would be to copy the following DLLs to a 3rdParty directory and modify the batch file to copy them to the root Bin directory on every build (see the sketch after this list).

  • System.Web.Helpers.dll
  • System.Web.Mvc.dll
  • System.Web.WebPages.dll

That way, you are not taking a dependency on extra DLLs that you are not using.
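As a sketch, a post-build batch file along these lines would do the job (the 3rdParty and bin paths here are assumptions; adjust them to match your own solution layout):

REM CopyRazorDlls.bat - copy the ASP.NET DLLs that Razor expects into the root of the Bin directory
REM (%~dp0 expands to the folder containing this batch file)
xcopy /Y "%~dp03rdParty\System.Web.Helpers.dll" "%~dp0bin\"
xcopy /Y "%~dp03rdParty\System.Web.Mvc.dll" "%~dp0bin\"
xcopy /Y "%~dp03rdParty\System.Web.WebPages.dll" "%~dp0bin\"

You would then hook this up under Project Properties -> Build Events -> Post-build event, with something like: call "$(ProjectDir)CopyRazorDlls.bat"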

Problem 4 - Adding DLLs

I had another problem: my console app was using a separate DLL in my project, and the Razor pages couldn't find its types to provide IntelliSense. The "Error List" window gives you errors like: The type 'StarWarsCharacter' is defined in an assembly that is not referenced.

To fix errors like this, you need to add the DLLs to the assemblies section of the app.config file.
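For example, an entry along these lines would do it (the assembly name and version here are hypothetical; use the details of your actual DLL):

<assemblies>
  <!-- hypothetical project assembly containing types like StarWarsCharacter -->
  <add assembly="StarWarsData, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</assemblies>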

Code

I have a sample project on GitHub that can be found here: OceanAirdrop/TestRazorMachineHTMLTemplate


That's it!

With those couple of changes in place in the console project, we get full IntelliSense (with no errors!) when editing cshtml Razor files. Yay!



Thursday 4 April 2019

Passing data into a SQL Trigger

Okay, that title is a BIG fat lie! - You can't actually pass data to a trigger!

But, I had a need to simulate the passing of data! You see, I had a trigger that performed some audit logging when data was changed, and I wanted to tie this event up with other events that had happened external to the trigger.

The scenario went something like this:

  1. There is code running in C# land that triggers an update statement in the database.
  2. The update statement fires a trigger which writes data to another table.
  3. The C# code continues and also writes some data to another table.

The problem I had was: how do I link these 3 events together? What I would like to do is generate an id in step 1 and then pass that id to the trigger to use, but you can't do that.

...Well, not explicitly!

SQL Magic Glue

So, after researching on the interweb, I found out that the SQL magic glue I needed was CONTEXT_INFO. You can read all about it in the official documentation.

You can basically think of it as a per-session cookie. That means applications can set it and then retrieve it again later on. Basically perfect for my trigger problem.

Abridged Version

Basically, the abridged version is something like this. In C# land, first we generate an id. I am using a GUID.

// Step 01: Generate a guid
public void SomeFunction()
{
   // Let's create a log guid in C# land that we can use to associate events in different tables
   var logGuid = Guid.NewGuid();
}

...Or if you are doing it on the database:

-- Step 01: Generate a guid
declare @logGuid uniqueidentifier = (select NEWID());
print @logGuid 

Then, take that unique id and save it to the context info for our db connection..

-- Step 02: Set The Guid as this sessions context info
DECLARE @ContextInfo varbinary(100);
SET @ContextInfo = cast(@logGuid as varbinary(100));
SET CONTEXT_INFO @ContextInfo;

If you're doing that in C# land, you will be executing the above SQL using the normal SqlConnection class. This effectively injects the GUID for this particular SQL connection, which the trigger can pick up and use.
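As a rough sketch (the connection string, table and column names are illustrative), the C# side looks something like this:

using System;
using System.Data;
using System.Data.SqlClient;

public static class LogGuidExample
{
    public static void UpdateWithLogGuid(string connectionString)
    {
        // Step 01: Generate a guid in C# land
        var logGuid = Guid.NewGuid();

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Step 02: Inject the guid into CONTEXT_INFO for this connection/session
            using (var cmd = new SqlCommand("SET CONTEXT_INFO @ContextInfo", conn))
            {
                cmd.Parameters.Add("@ContextInfo", SqlDbType.VarBinary, 100).Value = logGuid.ToByteArray();
                cmd.ExecuteNonQuery();
            }

            // Step 03: Fire the update on the same connection - the trigger can
            // now read the guid back out of CONTEXT_INFO()
            using (var cmd = new SqlCommand("update [schema].[table_name] set some_column = 'wibble-wobble' where some_id = 14", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}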

Now, we need to alter the audit trail triggers like this:

ALTER TRIGGER [schema].[tr_audit_table_name] ON [schema].[table_name]
AFTER UPDATE, DELETE AS
BEGIN
       
    -- Get The @logGuid Guid from the session context 
    DECLARE @logGuid uniqueidentifier = CAST(CONTEXT_INFO() as uniqueidentifier);

    PRINT @logGuid 
    
    -- Now you can use this guid and save it to the db along with any other 
    -- insert/update statements you make here

    update [database].[schema].[other_table] set some_column = 'update from trigger', log_guid = @logGuid
    where some_id = 14
...

Now, back in C# land, when you perform the update statement:

update [database].[schema].[table_name] set some_column = 'wibble-wobble', log_guid = @logGuid where some_id = 14

The trigger will fire and grab the GUID that we associated with the C# SQL connection! That GUID was set up outside the trigger, but the trigger was able to access it and use it.

Wrap Up

Now that's what I call noice!...



Sunday 31 March 2019

RazorMachine HTML Templating Library

The other day, I needed an HTML templating library for C#. I was writing a service which sends out reports, and I wanted the contents to be formatted HTML. Basically, I was looking for {{ mustache }} style templates, similar to JavaScript's Handlebars.

As it happens, there is a port of mustache for C# called, wait for it....... Nustache! 🙂 (Gotta love that name!)

But, as with all these things, the power of Google led me to find out what else was out there. As it turns out, there are a number of good templating libraries for C#.

Microsoft Razor

I was initially settling on DotLiquid until I remembered Microsoft's Razor pages! They are front and center in the new ASP.NET Core world, and Blazor (the experimental C# WebAssembly framework) also uses them.

The great thing about Microsoft's Razor pages is that they easily allow you to mix and lay out C# model objects alongside HTML. Visual Studio gives you syntax highlighting and code completion of your C# objects, and on top of all that you can add C# if statements and loops to control flow. It's a nice templating library with great IDE support. All you need to do is rename the .html file to .cshtml!

Here's a quick example of Razor syntax:

<h1>Your Action is @modelObj.ActionType </h1>
<p>Some text goes here </p>

@if (!string.IsNullOrEmpty(modelObj.Name))
{
   <p>Hello @modelObj.Name! </p>
}

<ul>
   @foreach(var favItem in modelObj.FavouriteList)
   {
     <li>@favItem.Name ( @favItem.Description )  </li>
   }
</ul>

<h3>That's all folks!</h3>

But I didn't know if it was possible to use Razor in a simple C# console application, outside of ASP.NET. After turning back to Google, I found a plethora of libraries out there.

The list goes on and on, but the two which looked the most promising were RazorEngine and RazorMachine. RazorEngine is the most popular; however, for my purposes I went with RazorMachine because it does not produce temporary files (as RazorEngine does).

Helper Class

Using RazorMachine, I knocked up this helper class to help with reading and rendering my cshtml project files:

public class RazorView
{
    private static RazorMachine m_razorMachine = null;
    public string TemplateName { get; set; }
    public object Model { get; set; }

    public RazorView(string templateName, object model)
    {
        TemplateName = templateName;
        Model = model;
        Initialise();
    }

    public static void Initialise()
    {
        if (m_razorMachine == null)
            m_razorMachine = new RazorMachine();
    }

    public string Run()
    {
        string content = null;

        if (!string.IsNullOrEmpty(TemplateName))
        {
            var htmlTemplate = ReadTemplate(TemplateName);
            content = m_razorMachine.ExecuteContent(htmlTemplate, Model).Result;
        }

        return content;
    }

    private string ReadTemplate(string fileName)
    {
        string rptTemplate = string.Format(@"{0}\Views\{1}", 
                AppDomain.CurrentDomain.BaseDirectory, fileName);

        if (!File.Exists(rptTemplate))
            return "";

        var htmlTemplate = System.IO.File.ReadAllText(rptTemplate);

        return htmlTemplate;
    }
}

With the above class we can then use it as follows:

Person modelObj = new Person();
modelObj.FirstName = "Bart";
modelObj.LastName = "Simpson";

string htmlContent = new RazorView("Test.cshtml", modelObj).Run();
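For completeness, a minimal Test.cshtml to go with that Person model might look something like this (a sketch; RazorMachine exposes the model object as @Model):

<html>
<body>
    <h1>Hello @Model.FirstName @Model.LastName!</h1>
</body>
</html>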

Wrap Up

That's about the long and short of it. I have put a sample project on my GitHub account here: https://github.com/OceanAirdrop


Sunday 3 February 2019

OWIN WebAPI with Self Signed HTTPS

I recently wondered what would be involved in setting up a self-signed HTTPS certificate for an internal WebAPI OWIN service that listens on a specific port number. Due to the interesting problems that cropped up as I went through the steps, I thought I would write them up as a reminder for my future self.

As we know, all services should allow you to connect to them over HTTPS if you are transferring sensitive data (think passwords, etc). If you are writing many local services, instead of adding every self-signed certificate you create to the trusted root store, a better alternative is to create your own root CA certificate.

This root CA certificate should be installed into the trusted store of the machine. What does this give us? Well, this way, each service has a certificate of its own that is created/signed from our root CA certificate, and because we have trusted the root cert, all other certificates made from it will be automatically trusted (this is the chain of trust).

MakeCert

Now, if you have created any certificates in the past, you will certainly have come across the Windows makecert command. Creating a CA certificate with this utility is pretty straightforward:

makecert.exe -r -n "CN=OceanAirdropCA" -pe -sv OceanAirdropCA.pvk -a sha512 -len 4096 -b 01/01/2019 -e 01/01/2040 -cy authority OceanAirdropCA.cer
pvk2pfx.exe -pvk OceanAirdropCA.pvk -spc OceanAirdropCA.cer -pfx OceanAirdropCA.pfx
pause

Then, later on, when you want to generate client certificates from this CA cert, again, it's pretty simple:

makecert.exe -iv OceanAirdropCA.pvk -ic OceanAirdropCA.cer -n "CN=OceanAirdropClient" -pe -sv OceanAirdropClient.pvk -a sha512 -len 4096 -b 01/01/2019 -e 01/01/2040 -sky exchange OceanAirdropClient.cer -eku 1.3.6.1.5.5.7.3.1
pvk2pfx.exe -pvk OceanAirdropClient.pvk -spc OceanAirdropClient.cer -pfx OceanAirdropClient.pfx

But the problem with the makecert utility is that it's as old as God's dog, and it turns out it doesn't populate certain fields that Chrome needs to validate the certificate. Here's the error Chrome gives you:

Chrome tells you that this certificate is not secure. This is because, since Chrome 58, you have to specify a subjectAltName as part of the certificate, but the makecert command does not allow you to generate a "Subject Alternative Name".

Using PowerShell to create our CA certificate

There are a couple of alternatives we could use to create the certs: either OpenSSL or PowerShell. This Stack Overflow post nicely explains how to create a self-signed cert using OpenSSL, but I opted to go the PowerShell route.

Here's what to do. First, start PowerShell as administrator and issue each of the following commands in order:

# Step 01: Setup params for the new self-signed cert. Notice that the key usage includes 'CertSign'
$params = @{
  DnsName = "OceanAirdrop.com CA"
  KeyLength = 2048
  KeyAlgorithm = 'RSA'
  HashAlgorithm = 'SHA256'
  KeyExportPolicy = 'Exportable'
  NotAfter = (Get-Date).AddYears(10)
  CertStoreLocation = 'Cert:\LocalMachine\My'
  KeyUsage = 'CertSign','CRLSign' 
}

# Step 02: Actually create our CA cert and store it in the variable $rootCA
$rootCA = New-SelfSignedCertificate @params

# Step 03: Export the public CA key to file
Export-Certificate -Cert $rootCA -FilePath "C:\certs\OceanAirdropRootCA.crt"

# Step 04: Export the public/private key to file (as a pfx file)
Export-PfxCertificate -Cert $rootCA -FilePath 'C:\certs\OceanAirdropRootCA.pfx' -Password (ConvertTo-SecureString -AsPlainText 'securepw' -Force)

The above commands first create a root CA certificate named OceanAirdropRootCA and then export it to disk in both the .crt and .pfx file formats.

Now we have a root CA certificate, ho-ho-ho!

But notice, in the above picture, that this certificate is not trusted on my machine. That's because it needs to be installed into the root certificate store. So let's install our root CA certificate into the computer's trusted certificate store.

I have found that you can type "certmgr.msc" from the start menu to access the certificates on your machine, BUT by default it only shows your personal certificates, and we want to install our certificate at the machine level.

So, to install the certificate, type "mmc" from the start menu to bring up the MMC snap-in. Then click File -> "Add/Remove Snap-In", where you will be presented with this dialog. Select Certificates and press "Add". From here, you will get the option to select "Computer Account".

In the Trusted Root Certification Authorities, right-click on the Certificates folder and select Import:

Then go through the wizard process:

Now, when we inspect our root certificate we can see that it is now trusted:

Using PowerShell to create a client certificate

At this point we have a root CA certificate that we can start using to mint/sign new client certificates. These client certs will be used by our OWIN services.

Again, open up PowerShell and run through the following commands:

# Now, at some point later on, you might want to create another certificate that is signed by your CA key

# Step 05: First let's load the CA cert from disk into the variable $rootCA
# (load the .pfx rather than the .crt - signing a new cert requires the private key)
$rootCA = Get-PfxCertificate -FilePath "C:\certs\OceanAirdropRootCA.pfx"

# Step 06: Setup params for the new cert, signed by our CA
$params = @{
  Signer = $rootCA
  KeyLength = 2048
  KeyAlgorithm = 'RSA'
  HashAlgorithm = 'SHA256'
  KeyExportPolicy = 'Exportable'
  NotAfter = (Get-date).AddYears(2)
  CertStoreLocation = 'Cert:\LocalMachine\My'
}

# Step 07: Actually create the cert and store it in the variable: $appCert1
$appCert1 = New-SelfSignedCertificate @params -Subject "*.my.domain" -DnsName "my.domain", "*.my.domain"

# Step 08: Export the keys to file to store securely
Export-Certificate -Cert $appCert1 -FilePath "C:\certs\appCert1.crt"
Export-PfxCertificate -Cert $appCert1 -FilePath 'C:\certs\appCert1.pfx' -Password (ConvertTo-SecureString -AsPlainText 'securepw' -Force)

When creating a client certificate, the root CA, key length and expiry date hardly ever change, but the common name and DNS names do. So, above, I declared the options that don't change for New-SelfSignedCertificate in a variable named "params"; then, on the call to New-SelfSignedCertificate, I specify the -Subject and -DnsName fields on the command line. Here's me running through those commands:

This produces a certificate that looks like this:

Notice that the Subject Alternative names are now correctly populated.

Registering the port with Windows

Okay, at this point we have created our certificates, and I assume we have a WebAPI service running and listening on a particular port number. For us to use our client certificate, we need to register the port with Windows, then bind the certificate to the service. Here's how to do that.

My service is going to be listening on port 9000 for HTTPS traffic. Open up a command prompt and issue the following command:

netsh http add urlacl url=https://+:9000/ user=everyone

If at a later date you need to delete the reservation (as I did in testing) you can use this command:

netsh http delete urlacl url=https://+:9000/

If you want to show a list of current bindings:

netsh http show urlacl > c:\bindings.txt
start notepad c:\bindings.txt

Binding the certificate to the service

At this point, we have the client certificate in the machine and have registered our listening port with windows. The next thing we need to do is run a command to bind our new SSL certificate to our application port (9000).

netsh http add sslcert ipport=0.0.0.0:{{port}} certhash={{thumbprint}} appid={{app-guid}}

There are three variables we need to plug into this command: the port number (which is 9000 in our case), the certificate thumbprint and a GUID. You can pick up the thumbprint of the certificate from here (you just need to remove all the spaces from the string):

For the GUID, you can either generate a random GUID or pick up the application GUID from your Visual Studio project:

Once you have got those 3 pieces of information, you would issue the command as below
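With made-up placeholder values filled in (substitute your own thumbprint and GUID), the command would look something like this:

netsh http add sslcert ipport=0.0.0.0:9000 certhash=a1b2c3d4e5f6071829304a5b6c7d8e9f01234567 appid={f2e8b5a1-89ab-4cde-9012-3456789abcde}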

If you are playing around a bit, at some point you will want to delete the binding. To delete an SSL certificate from a port number, use the following command:

netsh http delete sslcert ipport=0.0.0.0:9000

Opening ports on the firewall

If you intend to access this service from another machine, make sure you open the port in the Windows Firewall. You can do this from the command line using these commands:

netsh advfirewall firewall add rule name="OceanAirdrop_Svc_9000" dir=in action=allow protocol=TCP localport=9000
netsh advfirewall firewall add rule name="OceanAirdrop_Svc_9000" dir=out action=allow protocol=TCP localport=9000

Back to the code

Now, if we jump back to our code, all we need to do is alter the base address from http to the new https:

static void Main(string[] args)
{
    try
    {
        string baseAddress = "https://L-03067:9000/";

        // Start OWIN host 
        using (WebApp.Start(url: baseAddress))
        {
            Console.ReadLine();
        }
    }
    catch (Exception ex)
    {
        // Don't swallow errors silently - at least write them out
        Console.WriteLine(ex.Message);
    }
}
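For reference, when WebApp.Start is called with just a URL it locates an OWIN startup class by convention. A minimal one might look like this (a sketch assuming the Microsoft.AspNet.WebApi.OwinSelfHost package is installed):

using Owin;
using System.Web.Http;

public class Startup
{
    // OWIN calls this once at start-up to build the HTTP pipeline
    public void Configuration(IAppBuilder appBuilder)
    {
        var config = new HttpConfiguration();

        // Conventional WebAPI route: /api/{controller}/{id}
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        appBuilder.UseWebApi(config);
    }
}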

Now, when we visit the endpoint over https, all works as intended!


WebAPI Exceptions

Okay, it wasn't all plain sailing... I encountered these exceptions along the way, which may raise their heads for you too. If they do, here are the solutions.

WebAPI Exceptions - Failed to listen on prefix

If, when you run your code, you get the following error: "Failed to listen on prefix"....

...check to make sure you have registered the correct HTTP scheme. It should be netsh http add urlacl url=https://+:9000/ user=everyone (notice the s) and not netsh http add urlacl url=http://+:9000/ user=everyone

WebAPI Exceptions - Access Denied Exception

If you are debugging your service and get an "Access Denied" error back like this, make sure you start Visual Studio in Administrator mode.

You can do a quick check by looking at the title bar of Visual Studio. It should say (Administrator)

Wrap up!

That's it... I can now visit my WebAPI service over HTTPS using a self-signed certificate!


Sunday 21 October 2018

Flutter Mobile UI Framework

Flutter is a new mobile UI framework from Google that allows you to create native applications on iOS and Android. At the time of writing, it's still in beta but apparently will hit 1.0 soon!

Now, I gotta say, when I first learned about Flutter, and what it was all about, I was very intrigued!

You see, Flutter is the equivalent of a game engine, but for everyday apps!! I'll let that sink in. It's got an OnDraw function just like a game engine's loop would call (they call it the build function). Under the covers, it uses the C++ library Skia to draw pixels directly on the screen. And because it's C++ underneath, that means Flutter is cross-platform. Needless to say, this warrants a closer look from anyone who is writing mobile applications.

I'll say it up top! - Flutter looks very cool.

Stepping back a bit, I've always thought that, as C++ is the lingua franca of languages which all systems support, it might be possible for someone to write a cross-platform mobile UI framework using SDL or SFML. It was a fleeting thought, of course, because that's no simple undertaking. Yes, I know we already have the C++ Qt library and QML, but the masses aren't exactly flocking to use them.

Then there are game engines like Unity that can run on all platforms. Again, I have always wondered if it would be feasible/possible to use one to write a line-of-business app. I mean, it uses C# (which is a win) and the app would run everywhere Unity runs! That's a lot of places. But again, you would have to write the GUI widgets/controls yourself. The amount of work involved would be massive!

Enter Flutter.

Turns out Google was thinking the same thing, because that's what Flutter is all about. It's written in both C++ and Dart. C++ is used for the heavy lifting and the OpenGL calls (via Skia), but the core Flutter library is written in Dart. Yes, that means we will be writing Dart code to build the Widgets/Components on the screen.

Incidentally, I first came across Skia when I found SkiaSharp. Skia itself is an open-source C++ 2D graphics library and is used in Chrome, Firefox, Android, etc., and SkiaSharp is a C# port which is used in Xamarin Forms. But I digress...

So what about the GUI elements? Well, the framework calls them Widgets and provides you with containers, buttons, forms, list views, scrollers, etc. It's from these foundational Widgets that you create your own custom controls.

Why Dart?

Ughhh. Dart? That was my first thought... Why didn't they choose Kotlin? The last mobile app I wrote used Kotlin, which is a very nice language, and because it is built on the JVM you can interface with the large set of existing Java libraries out there.

So, at first, I was a little disappointed to learn that Flutter uses Dart as its primary language. But you know what? After playing with Dart a bit, it's actually a pleasant language.

Now, there have been some changes to Dart since it first came out that I was not aware of. For example, when Dart 1 originally shipped, it had optional typing. But thankfully, Google saw the error of their ways, and for Dart 2, strong mode has been adopted. As Flutter uses Dart 2, this means our code is like C# & TypeScript in that types are required or inferred. This is good news, as it means you get all the benefits of static typing.

There are some weird things to get used to, though. For example, Dart does not have public or private keywords!! Yes, classes can't have private variables!! Coming from other languages where the keywords "public, protected & private" are the norm, these keyword omissions just seem weird to me. There are other oddities, like multi-threading, that I won't get into here. However, for the most part, if you're coming from JavaScript, Java or C#, then Dart is a pleasant language to code in.

I still kinda think it's a shame they didn't go with a more powerful language like Kotlin, especially as JetBrains are working on Kotlin Native, which would have solved the iOS side of the story. However, I suspect the choice to use Dart was a political one, as Google own and control Dart, so there won't be any nasty surprises further down the road (cough Oracle)!

Code Sharing versus UI Sharing

Here's the really cool thing I love about Flutter: the user interface you design and write is shared across Android and iOS. Which, when you think about it, makes sense... I mean, all it is doing is spitting out the same pixels to the screen. The framework doesn't care that the screen in question just happens to be an iOS screen with a notch on the top! The framework's game loop is just calling its OnDraw function.

If you want the same brand theming across iOS and Android, this is great; however, Flutter is also platform-aware, so if you want Apple-style widgets on iOS, you can have them! The choice is yours!

When Xamarin first came out, their selling point was that you wrote the business logic once but then had to write the UI layer twice. In the early days, the benefit to developers was more about the "code sharing" aspect of the platform than the "UI sharing". Of course, since then, Xamarin has introduced Forms, which aims at helping developers also write the UI code once.

In comparison, it seems to me that the biggest benefit of Flutter is the "UI sharing" aspect of the framework, and the code sharing is secondary. This nicely brings me on to APIs...

Accessing Android & iOS APIs - Plugins!

One of the big selling points of Xamarin is that they give you access to the "full spectrum of functionality exposed by the underlying platform". Basically, any APIs you can call from a native iOS or Android app, you can call with Xamarin. It exposes all those native APIs from C#. From a developer's point of view, you gotta say, that's pretty cool.

The Flutter folk have gone down a different path. Instead of exposing every conceivable API you could call, they allow you to write plugin packages that you can share with the community. These plugins allow you to write Dart code that interfaces with a specific set of APIs. This means you can always open up the iOS or Android sub-folders and write as much Obj-C/Swift or Java/Kotlin code as you like.

I think anyone who takes even a cursory glance at Flutter will agree that they have got the UI side down. But I think the crucial part of the Flutter story is going to be the interop with the native APIs. For example, if you choose Xamarin, you know the platform APIs are exposed and available for you to call from C#, but with Flutter you are either going to have to rely on the community to provide a plugin that interfaces with the APIs you want, or roll up your sleeves and write the platform-specific code yourself (and potentially twice).

It's going to be interesting to watch how this side of the story unfolds for Flutter.

Reactive Views & Hot Reloading

Flutter uses the same model as React for building UI. If you think about the visual designers we have had with the likes of WinForms, WPF, Android & iOS, they all focused on the visual layout of the UI alone. One thing that stood out for me straight away was that there isn't a visual designer for Flutter. No XAML or XML layout editor.

With Flutter though, you write your UI directly in code! Now, before you throw your teddy bear out of the pushchair, it turns out that this isn't a problem because of the fast hot reload. Essentially, the code you change is instantly reloaded in the emulator, which means you get instant feedback.

Here is the obligatory screenshot from the main Flutter website, showing hot reload in action:

There is also something called the Widget Inspector, which is very cool. Whatever part of the screen you touch, it will tell you which class that region of the UI relates to. It allows you to jump straight from the UI to the code!

Building Blocks of an App

There are lots of great articles on how to get started with Flutter (like these ones here, here & here), but once you have got to grips with all the Widget aspects of the framework, you start to have questions about the other building blocks of a mobile app, like I did.

Here are some of the things I have noticed after reading up on Flutter.

Coming from C# / Java, where we have great JSON libraries like Json.NET or Gson, I was a little taken aback to realise there was nothing equivalent in Dart. Also, because the official docs mention that runtime reflection is disabled in Flutter, the chances of getting a library like Json.NET or Gson are slim to none! Instead, this article explains what it is like to perform JSON serialization/deserialization in Flutter.

For network comms, the last app I wrote made use of the excellent OkHttp library. Of course, there is no equivalent in Dart. If you intend to do anything like certificate pinning, you can make use of this plugin or write the plumbing code yourself.

If you want to make use of a local database, thankfully SQLite is available via the sqflite plugin. This is a great write-up on how to incorporate it into your app.

If you are familiar with the MVVM pattern, then this article could be a useful read; it uses the ScopedModel plugin, which allows you to automatically update the Widgets in the tree when the model is updated.

If you plan on writing an app that makes use of background notifications, then the latest beta version of Flutter now has a solution. The sample code for background execution of Dart code can be found here.

If you are looking at integrating a map into your application, then using Leaflet could be an option until the Google Maps plugin is properly supported (at the moment, it does not support iOS). There are good write-ups on how to use Leaflet here and here. The plugin is available here.

At some point you might want to include a webview as part of the app. Yes, there is a plugin for that! This article has a good write-up, and the plugin can be found here.

Finally, I've been looking at what is available with regards to client-side encryption. If you need to encrypt/decrypt data using AES/SHA256, then this plugin library uses native platform implementations on both Android/iOS.

Wrapping Up

I am not sure when Flutter is going to hit 1.0. There are still over 4,000 open issues on GitHub, so I guess there is a way to go before they iron out all the remaining severe issues. However, the framework has a lot of promise and is very exciting. Flutter is definitely one to watch!


Sunday 7 October 2018

Installing RabbitMQ on Windows

I really like RabbitMQ!

If you have not come across RabbitMQ before, it's a messaging system that enables you to split your system up so it can communicate and send data between services, or even other systems. If you want to ensure your system meets your reliability, scalability and performance requirements for today, but also for tomorrow, then a messaging system like RabbitMQ is a great shout.

In a nutshell, RabbitMQ provides a way of architecting your system so that you can write small services that each focus on one job and can scale to meet your performance requirements.

What does that mean?

Well, when you are designing a large system, there are going to be different parts of the system that will want to communicate with each other. For example, sending commands/actions from one system to another or even just sending notifications.

So, instead of writing your traditional monolithic application, which is just one big codebase, message queues enable you to write small services. These services are usually lightweight and focus on one job! So, for example, you could have a ProcessOrder service, an AlarmHandler service, etc.

But how do these services talk to each other? Well, that's where message queues and RabbitMQ come in.

RabbitMQ is the glue that enables asynchronous communication between each of our business-layer components. At its core, RabbitMQ is a FIFO (first in, first out) message queue, and it's fast! The messages are transient, meaning they are not stored forever; at some point they are picked up for processing. In addition to all this, RabbitMQ can guarantee that messages will be delivered to the destination no matter what happens!

Writing software this way has lots of benefits. If one of the services is under heavy load, we can spin up extra copies of the service (even across more machines) to cope with the load. This is called horizontal scaling and means each service can scale independently of the others.

More importantly, because these services are independent of one another (and not part of some monolithic application), we can alter the functionality of one service in isolation from the others, knowing our change will only affect that service. This makes for a more maintainable system.

If you need to connect lots of things together, want great performance, and need something robust and production-ready, you can't go wrong. As I said, I really like RabbitMQ!

Best of all, and this is the amazing bit... it's free!

Installing RabbitMQ on Windows

RabbitMQ is available for install on both Linux and Windows. As I have recently installed RabbitMQ on a Windows system, I thought I would write up the installation process while it was still fresh! My perception is that RabbitMQ is better supported on Linux than on Windows, and there are a couple of gotchas when installing on Windows.

Step 1 - Set Base Directory

Okay, before even installing RabbitMQ, the first thing we need to set up is the RABBITMQ_BASE environment variable. The reason for this is that, by default, the RabbitMQ database/queues are saved to disk on the C: drive, in the directory %APPDATA%\RabbitMQ. As I mention in this StackOverflow answer, this just seems wrong to me. So, just to make sure we have space, we set this variable up to force the database to be held on the D: drive.
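From an elevated command prompt, setting the variable machine-wide looks like this (D:\RabbitMQBase is simply the directory I chose; any drive with space will do):

setx RABBITMQ_BASE "D:\RabbitMQBase" /M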

Step 2 - Installation

With the RABBITMQ_BASE environment variable set up, we can now actually kick off the installer process. Now, RabbitMQ depends on Erlang, which means we are going to need to install Erlang first, before RabbitMQ.

At the time of this blog post, I am downloading RabbitMQ Version 3.7.8 and this version supports Erlang 19.3.6 up to 21.0. You can find out which version of Erlang RabbitMQ supports by going here.

Once installed, you will be able to fire up the RabbitMQ Command Prompt.

To confirm everything installed fine, run the status command from the RabbitMQ command line

rabbitmqctl status

Step 3 - Setup Admin User

By default, RabbitMQ comes with the guest/guest credentials. Let's first take the time to set up an administrator account. From the RabbitMQ command line, run the following commands:

rabbitmqctl add_user adminuser password1
rabbitmqctl set_user_tags adminuser administrator
rabbitmqctl set_permissions -p / adminuser ".*" ".*" ".*"

You will be able to change the password from the web interface, but if you need to change the password from the command line, type:

rabbitmqctl change_password adminuser password2

Step 4 - Enable the Web Management Interface

By default, RabbitMQ gets installed with everything off. Next, we need to enable the web management interface:

rabbitmq-plugins enable rabbitmq_management

From the same machine you should now be able to log onto the management interface by visiting http://localhost:15672 and logging on with the admin user we previously created.

If at any point you need to disable the web interface, you can do so with the following command:

rabbitmq-plugins disable rabbitmq_management

Step 5 - Open Ports on Windows Firewall

To be able to access the management interface externally, we need to open some ports on the firewall. Run the following commands:

netsh advfirewall firewall add rule name="RabbitMQ_15672" dir=in action=allow protocol=TCP localport=15672
netsh advfirewall firewall add rule name="RabbitMQ_15672" dir=out action=allow protocol=TCP localport=15672

Whilst we are here, we also need to open port 5672 to allow clients to access RabbitMQ and add or remove work to the queues:

netsh advfirewall firewall add rule name="RabbitMQ_5672" dir=in action=allow protocol=TCP localport=5672
netsh advfirewall firewall add rule name="RabbitMQ_5672" dir=out action=allow protocol=TCP localport=5672

If you need to delete these rules at a later date, run the following command:

netsh advfirewall firewall delete rule name="RabbitMQ_5672" protocol=TCP localport=5672

With the 15672 and 5672 rules in place, you can now access the management interface (and the queues themselves) from an external machine.

Step 6 - Setting up the config file

The next thing I did was to set up the config file. By default, RabbitMQ doesn't come with a config file, so you will have to set one up yourself. You can find an example of the new rabbitmq.conf file here. The documentation talks about setting up a new environment variable named RABBITMQ_CONFIG_FILE. However, it looks like there is a bug in the Windows version of RabbitMQ, as I could not get it to pick up the config file from this location.

So, as a work-around, copy the config file to the RABBITMQ_BASE directory (which in my case is D:\RabbitMQBase).

Now, to get RabbitMQ to pick up the config file on Windows, we also need to stop and reinstall the RabbitMQ Windows service using the following commands:

rabbitmq-service stop
rabbitmq-service remove
rabbitmq-service install
rabbitmq-service start

When we then take a look at the log file after our restart, we can see that it has picked up our config file:

Now that we have the config file set up, we can specify things such as maximum memory usage and disk limits. For example, the following config setting pins the maximum memory usage to 1GB of RAM:

vm_memory_high_watermark.absolute = 1024MB
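Similarly, the disk limit can be set. As an illustrative example (the 2GB threshold is just a value I'd pick), the following setting makes RabbitMQ block publishers when free disk space drops below 2GB:

disk_free_limit.absolute = 2GB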

At this point, that's it! We have successfully installed RabbitMQ on our Windows system!

RabbitMQ Diagnostics & Monitoring

Now, because a messaging system such as RabbitMQ is such a crucial component of the system, it would be good if we could be notified when there was a problem.

For example:

  • One of the RabbitMQ nodes goes down
  • One of the queues stops receiving data
  • There are no consumers moving messages off a queue
  • The number of messages in a queue exceeds a certain number

That kinda thing. Thankfully, RabbitMQ has an API that allows you to query the service. In a browser, you can query the interface and get JSON data back about the service. For example:

Here is an example of the different endpoints you can programmatically call:

http://adminuser:password1@127.0.0.1:15672/api/
http://adminuser:password1@127.0.0.1:15672/api/queues
http://adminuser:password1@127.0.0.1:15672/api/queues/%2f/queue-name
http://adminuser:password1@127.0.0.1:15672/api/connections
http://adminuser:password1@127.0.0.1:15672/api/consumers

For a full list of all the available API endpoints, see this page: https://pulse.mozilla.org/api/
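As a quick C# sketch of polling one of these endpoints (reusing the adminuser credentials we created in step 3), it is just an authenticated HTTP GET:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class RabbitMonitor
{
    // Fetch the JSON description of every queue (message counts, consumers, etc.)
    public static async Task<string> GetQueueStatsAsync()
    {
        using (var client = new HttpClient())
        {
            // Basic auth with the admin user created earlier
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("adminuser:password1"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            return await client.GetStringAsync("http://127.0.0.1:15672/api/queues");
        }
    }
}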


Wrapping Up

That's it! Job done!



Sunday 30 September 2018

Outputting HTML Content to PDF in C#

If you type "C# PDF Library" into google you will find there are lots of different offerings out there.

My search turned up IronPDF, which costs $399. Then there's EOPdf, which costs $749, or abcPDF at $329. Hey, what about ExpertPDF HtmlToPdf Converter, which is $550?

But what about free?

Well, after a bit more interweb searching, I came across HTML Renderer for PDF using PdfSharp. Now, I have used the HTMLRenderer in previous projects, such as my own DataGridViewHTMLCell project, so naturally it was my first port of call.

But when testing out this library I hit a snag...

You see, I want to be able to create bookmarks in PDF as well as force page breaks in HTML and, unfortunately, during my testing I was not able to achieve this using "HTML Renderer For PDF". It seems that if all you want to do is throw some HTML together and convert it to PDF, and you don't care about formatting, then HTML Renderer will be perfect for you, especially as it's lightweight, performant and 100% managed code.

And so, my search continued!

..I guess at this point, I should back up and explain the requirements/features I am looking for in my PDF library:

  • The ability to output to PDF (which is err.. self-explanatory!)
  • More importantly, the ability to output HTML to PDF
  • The ability for library to be used in Click Once App
  • The ability to create bookmarks in PDF Documents (This is important)
  • The ability to force page breaks in HTML (This is important)
  • The ability to set page orientation (landscape, portrait)
  • The ability to set page size (A4)

Now, before continuing, I should say HTML Renderer is built on PdfSharp, which looks very good and in theory should allow me to create bookmarks and page breaks, but I could not get HTML Renderer to work for me. I didn't spend too much time testing, however, and your mileage may vary.

OpenHtmlToPDF

Now, if you do enough digging, eventually you will come across wkhtmltopdf. It's an "open source command line tool to render HTML into PDF using the Qt WebKit rendering engine", and it seems a lot of PDF libraries just sit on top of this tool and leverage its functionality.

There are a number of libraries that make use of wkhtmltopdf, such as NRecoPDF, but what I liked about OpenHtmlToPDF is that the wkhtmltopdf.exe is held inside the OpenHtmlToPDF DLL as an embedded resource. This means that, when it comes to apps that use ClickOnce deployment, everything just works, as you don't have to manually specify the .exe as part of your deployment.

So, what does the code look like? Well, first install OpenHtmlToPDF via NuGet and then run this simplest of examples:

private void TestOpenHtmlToPDF(string html, string fileName)
{
    var pdfBytes = Pdf.From(html).Content();
    System.IO.File.WriteAllBytes(fileName, pdfBytes);
}

Page Breaks

Like I mentioned, one of my requirements was to be able to programmatically force a page break from HTML. Fortunately, it wasn't long before I stumbled across this StackOverflow post that helped me here.

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="content-type" content="text/html;charset=UTF-8" />
    <title>Paginated HTML</title>
    <style type="text/css" media="print">
        div.page {
            page-break-after: always;
            page-break-inside: avoid;
        }
    </style>
</head>
<body>
    <div class="page">
        <h1>This is Page 1</h1>
        <p>some text</p>
    </div>

    <div class="page">
        <h1>This is Page 2</h1>
        <p>some more text</p>
    </div>

    <div class="page">
        <h1>This is Page 3</h1>
        <p>some more text</p>
    </div>
</body>
</html>

With the above HTML template, and the following code snippet which sets the wkhtmltopdf setting "web.printMediaType" to true:

private void TestOpenHtmlToPDF(string html, string fileName)
{
    var pdfBytes = Pdf.From(html)
                      .WithObjectSetting("web.printMediaType", "true")
                      .Content();

    System.IO.File.WriteAllBytes(fileName, pdfBytes);
}

OpenHtmlToPDF produces the following PDF file. Notice that the bookmarks are automatically generated from the HTML <H1> tags and that the content is separated across pages. Exactly what I was looking for.

In fact you can access any of the underlying wkhtmltopdf settings using code like the following:

private void TestOpenHtmlToPDF(string html, string fileName)
{
    var pdfBytes = Pdf.From(html)
                      .WithGlobalSetting("size.paperSize", "A4")
                      .WithObjectSetting("web.printMediaType", "true")
                      .WithObjectSetting("footer.line", "true")           // footer.line is a boolean (draws a line above the footer)
                      .WithObjectSetting("footer.center", "me a footer!")
                      .WithObjectSetting("footer.left", "left text")
                      .WithObjectSetting("footer.right", "right text")
                      .WithMargins(0.5.Centimeters())
                      .Content();

    System.IO.File.WriteAllBytes(fileName, pdfBytes);
}

There are lots of settings which can be passed to wkhtmltopdf. You can find a listing of them here.

Wrapping Up

All in all, if you want a bit more control over the HTML/PDF content you produce, then OpenHtmlToPDF and wkhtmltopdf might be the ticket. It's a thumbs up from me!


Saturday 23 June 2018

Auto Updating an Android App outside the Play Store

In my previous post, I explained how I was writing a private Android app that is only intended for personal use (basically a hobby project). Because of this, I do not intend to release it in the Google Play Store.

But that brings up an interesting question:

How am I going to update this app over time?

One of the benefits of hosting your app in the Google Play Store is that you get the ability to auto-update your apps (amongst other things, like in-app billing). But, as the official documentation explains, there are many ways you can distribute your Android app, including:

  • Distributing through an app marketplace
  • Distributing your apps by email
  • Distributing through a website

This blog post explains the steps I went through to get my Android app to auto-update via a website. In a nutshell, I wanted the app to be able to:

  • Check a website to see if an updated .apk is available
  • If so, give the user the option to download the updated .apk
  • Start the install process

To break that down a little bit more, I needed to:

  1. Extract the app version from the embedded resources in the Android App
  2. Create a new WebAPI endpoint that returns the latest version of the app and the MD5 hash of the .apk
  3. Add the .apk file to IIS and setup Mimetype for successful download
  4. Create about page in Android app with a "check for updates" button
  5. Call the version API to see if the server has an updated version of the app available
  6. Call the web URL that points to the .apk file and download it using OkHttp
  7. Once downloaded, compare the hash of the file with the expected hash given by the API
  8. If successful, kick off the install process.
  9. Et voilà!

Let's dig into the main areas of it..

App Version Number

The first thing we need on the Android side is for the app to have the ability to check its own version number.

The official Android page on versioning talks about the use of the versionCode number, which can be used to track the app version. This integer is stored in the build.gradle file

and looks like this:

defaultConfig {
   versionCode 2
   versionName "1.1"
}

From code, we can access the integer using the following code snippet:

try { 
   PackageInfo pInfo = this.getPackageManager().getPackageInfo(getPackageName(), 0);         
   int versionCode = pInfo.versionCode;   // the integer from build.gradle
} 
catch (PackageManager.NameNotFoundException e) 
{ 
   e.printStackTrace(); 
}

Simples!

ASP.NET WebAPI Endpoint

Now that the Android app can retrieve its own version number, we need to be able to call a WebAPI endpoint that returns the latest app version from the server to compare against. Something like:

[HttpGet]
public int GetCurrentAppVersion()
{
    var version = Convert.ToInt32(ConfigurationManager.AppSettings["AndroidAppVersion"]);
    return version;
}

This simple function is just reading the app version from the web.config file:

<appSettings>
    <add key="AndroidAppVersion" value="3" />
</appSettings>

..however, it might be better for this field to be held in a database table somewhere so it can be easily updated. As it stands, I need to alter the web.config file every time I update the app!
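A sketch of that database-backed version might look like this (the app_settings table and "Main" connection string name are invented for illustration):

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.Http;

public class AppVersionController : ApiController
{
    [HttpGet]
    public int GetCurrentAppVersion()
    {
        // Hypothetical app_settings(name, int_value) table - adjust to your schema
        using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        using (var cmd = new SqlCommand("select int_value from app_settings where name = 'AndroidAppVersion'", conn))
        {
            conn.Open();
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}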

Add the .APK file to IIS

The next job is to add the latest copy of the Android .apk file to IIS. For this, you simply copy the .apk file into the wwwroot directory.

Once you have copied the .apk to the root of the website directory, we need to allow IIS to serve the .apk. For this, you need to add the following MIME type to the web.config file:

 <staticContent>
        <mimeMap fileExtension=".apk" mimeType="application/vnd.android.package-archive" />
 </staticContent>

Another way to do this would be to manually add the MIME type using IIS Manager, but I think using the web.config is the better option because it makes the solution more easily deployable.

Once you have done this, we can test the endpoint to make sure we can download the .apk file using a browser. The rationale behind this test is that if we can download the .apk file manually, our Android app will also be able to download it.

Downloading the .APK file using OkHttp

The next step is to download the .apk file from the Android app using the OkHttp library. Using OkHttp makes downloading a binary file mega easy. The following code is asynchronous and very efficient. Once the file has finished downloading, the onResponse function is called, which saves the .apk file to the cache directory.

private fun downloadAPK() {

    toast("Please Wait.... Downloading App");

    val client = OkHttpClient()

    var apkUrl = "http://192.168.0.18:801/oceanairdrop.apk";

    val request = Request.Builder()
            .url(apkUrl)
            .build()

    client.newCall(request).enqueue(object : Callback {

        override fun onFailure(call: Call, e: IOException) {
            e.printStackTrace()
        }

        @Throws(IOException::class)
        override fun onResponse(call: Call, response: Response) {

            val appPath = getExternalCacheDir().getAbsolutePath()
            val file = File(appPath)
            file.mkdirs()

            val downloadedFile = File(file, "appUpdate.apk")

            val sink = Okio.buffer(Okio.sink(downloadedFile))
            sink.writeAll(response.body()?.source())
            sink.close()

            m_fileLocation = downloadedFile.toString()

            this@ContactInfo.runOnUiThread(java.lang.Runnable {

                // UI Code 
                toast("Successfully Downloaded File");

                try {
                    if ( m_fileLocation != "")
                    {
                        val mimeType = "application/vnd.android.package-archive"
                        val intent = Intent(Intent.ACTION_VIEW)
                        intent.setDataAndType(Uri.fromFile(File(m_fileLocation)), mimeType)
                        intent.flags = Intent.FLAG_ACTIVITY_NEW_TASK 
                        startActivity(intent)
                    }
                    
                    // close activity
                    finish()
                }
                catch (ex: Exception)
                {
                    Utils.LogEvent("AppUpdate", Utils.LogType.Error, ex.message.toString())
                }
            })
        }
    })
}

Once the .apk file has finished downloading, we need to launch it which will install it for the user. The code that runs on the UI thread is responsible for installing the .apk:

val intent = Intent(Intent.ACTION_VIEW)
intent.setDataAndType(Uri.fromFile(File(m_fileLocation)), "application/vnd.android.package-archive")
intent.flags = Intent.FLAG_ACTIVITY_NEW_TASK
startActivity(intent)

FileUriExposed exception

Now, there is one little problem with the above intent code which kicks off the install process: setDataAndType no longer works if you are targeting API 24 and above (which is Android 7.0, Nougat). If you are targeting API 24 or greater, you will get a FileUriExposed exception. The new approach is to use the FileProvider class to launch the new app.

However, to cheat and side-step this issue, you can call the following function on application start-up, which will allow the preceding code to work. Please see this reference post for more information.

private fun enableLaunchAPK() {
   if (Build.VERSION.SDK_INT >= 24) {
       try {
           val m = StrictMode::class.java.getMethod("disableDeathOnFileUriExposure")
           m.invoke(null)
       } catch (e: Exception) {
           e.printStackTrace()
       }
   }
}

Wrapping Up

That's it.... All works. I now have an application that can auto-update. Now I just need to find some time to actually... you know... Update the app!


