Thursday, 15 March 2018

Azure Functions: Exception while executing function...connection string is missing or empty

I've recently started to play with Azure Functions, trying them out in Visual Studio with C# and in Visual Studio Code with JavaScript.

I've focused on the HTTP triggers, specifically basic CRUD interactions with Azure Table storage, which will become a future series of blog posts.

My first attempt in C# went fairly smoothly: I created a function, hit run and let Visual Studio start the functions runtime, then called the function via Postman and it returned data read from my local Azure Storage Emulator.

I then tried to recreate what I'd just done in JavaScript using V1 functions, creating the function from the azure-cli template with a minor alteration to read from my existing table.

The Problem

I started the function using func run TestGet and then tried to hit it using Postman, only to get an error:

Exception while executing function: Functions.TestGet -> Microsoft Azure WebJobs SDK 'MyStorageConnectionAppSetting' connection string is missing or empty. The Microsoft Azure Storage account connection string can be set in the following ways:
1. Set the connection string named 'AzureWebJobsMyStorageConnectionAppSetting' in the connectionStrings section of the .config file in the following format , or 
2. Set the environment variable named 'AzureWebJobsMyStorageConnectionAppSetting', or
3. Set corresponding property of JobHostConfiguration.

This confused me greatly as I had used the default AzureWebJobsStorage setting exactly as I had with the C# version which had worked.

Googling the error message turned up next to no results, and the majority of those didn't actually have any relevant information, but this GitHub issue provided the clue to the answer, specifically this comment which mentioned the IsEncrypted setting.

What I had noticed is that when the functions runtime started up, the first line in the command window was The input is not a valid Base-64 string as it contains a non-base 64 character, so I checked my IsEncrypted setting

{
  "IsEncrypted": true,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true;"
  }
}

and found that it was set to true. Because of this, when the functions runtime started up it expected the values in appsettings.json/local.settings.json to be encrypted, and since they weren't it wouldn't read the settings that were there. Setting IsEncrypted to false allowed the runtime to read the settings.
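
For completeness, this is the same settings file as above with IsEncrypted flipped to false, which was all the change that was needed:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true;"
  }
}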

The cause

A comment on the GitHub issue suggested that the IsEncrypted setting should be false, so to check whether I had caused the issue or it had been the azure-cli, I created a new folder and ran the command func init, which creates the basic files needed for functions, including the appsettings.json, which looks like this:

{
  "IsEncrypted": true,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true;"
  }
}

So it would seem the azure-cli is the cause of this issue, and whilst you need to ensure the values are encrypted when deployed to production/live, when working locally you need them unencrypted for the runtime to be able to read them.

Why didn't it happen in Visual Studio?

I wanted to know why I hadn't run into this when I was creating functions in Visual Studio, so I went back into VS, created a new functions project and checked what had been created in the local.settings.json:

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": "UseDevelopmentStorage=true"
    }
}

As you can see it sets IsEncrypted to false, which is why I never had the problem with the C# functions. As an experiment I set IsEncrypted to true and tried to run the functions, and interestingly the functions runtime reported an error: Failed to decrypt settings. Encrypted settings can only be edited through 'func settings add'. Whilst not exactly clear, this does at least give a better phrase to search for.

Hopefully if anybody else runs into this problem they'll find it easier to resolve than I did.

Monday, 12 February 2018

WebApi controller - using anonymous types as return values

A couple of projects I've worked on recently have both used anonymous objects to return data as Json from WebApi controllers to a SPA UI.

Microsoft's recommended return type for WebApi controller methods is IHttpActionResult, and they provide a variety of helper methods to make creating the response easy, e.g. Ok(), BadRequest(), etc.

Returning an anonymous object as Json is as easy as using the Json method and creating the anonymous object as the parameter to the method:

public IHttpActionResult Get()
{
    return Json(new { id = 1 });
}

Why do this?

There is a real advantage in using this technique as it gives you a lot of flexibility in what you return: there is no need to define different view model classes for each "shape" of data you need, so it cuts down on the amount of boilerplate code.
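
As a rough illustration of that flexibility, here's a hypothetical action (the names and data are invented for the example) returning a nested "shape" built on the fly, assuming it sits in the same ApiController as the Get above:

public IHttpActionResult GetSummaries()
{
    // hypothetical entities projected straight into the "shape" the UI needs,
    // no view model class required
    var widgets = new[]
    {
        new { id = 1, name = "Widget one" },
        new { id = 2, name = "Widget two" }
    };

    return Json(new { count = widgets.Length, items = widgets });
}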

Unit Testing

Not everybody writes unit tests for controller methods which return data, but if you do, the use of IHttpActionResult as a return type makes it a little trickier than you might anticipate to look at the Json returned.

If you try to use the strongly typed classes (IHttpActionResult, HttpContent, etc.) you'll most likely find yourself going down a rabbit hole trying to get to the content of the response, which eventually leads to either using reflection to get at the data or using dynamic.

However, if we take a shortcut and make use of dynamic straight away, we can vastly simplify the code, giving us a test that looks like this:

[Test]
public void Should_return_id_in_response()
{
    var target = new DataController();

    dynamic response = target.Get();

    dynamic content = response.Content;

    Assert.That(content.id, Is.EqualTo(1));
}

By assigning the return from the controller method to dynamic we avoid the need to explicitly call ExecuteAsync, and using dynamic to access the content makes it easy to get hold of the values without any Json wrangling.

At this point if you've created the test in the same assembly as the controller it will pass - success!

But if you've created your test in a separate project, when you run the test you'll get an error thrown when trying to check the id in the assert:

'object' does not contain a definition for 'Content'

If you've used dynamic before, chances are you'll have seen this error previously, as it occurs any time you try to access a property that it isn't able to find or access on the type that has been returned.

At this point you might be scratching your head wondering why, since you're able to access the content and you know exactly what should be in it; if you debug the test you'll even see the anonymous object created correctly.

As it turns out, the reason you get this error is that the C# compiler generates the class backing an anonymous type as internal, and because it's internal the project holding the tests is not allowed to access its properties.

To remedy this all you need to do is to go into the WebApi project and add an attribute to its AssemblyInfo.cs:

[assembly: InternalsVisibleTo("insert the name of your test assembly here")]

Doing this allows your test project to see internal classes in your WebApi project; if you run the test again it will now pass.

Code

A repo with the code for this can be found here if you want to see it in action.

Monday, 12 September 2016

Your agile isn’t my agile old man

Seems everybody and their dog are "doing agile", and they all seem to be complaining about how agile isn't helping them; in fact, for a lot of them it seems to be hindering them.

I’ve wondered about this for a while and it wasn’t until reading Dan North’s recent post “How to train your agile” that it struck me.

I’ve been writing software for just under 20 years and when I started the way projects were managed were completely different from how a lot of people manage them now…

When I was young it was all fields around here

At the end of the 90s, delivery of software was frequently measured in years, projects were often cancelled before completion, and those that were delivered frequently had stability issues requiring lots of rework.

This way of working was fundamentally broken and so people set out to find a better way to work.

When they started using agile, delivering in months not years and focusing on ensuring the software worked, you can see how radical the change was. It led to a real shift in how software was delivered, giving businesses:

  • Faster development
  • Better quality software
  • Working software delivered to production

These are all things you still hear associated with agile today. What organisation wouldn't want them compared to what they had?

Whatever grandpa, that’s not how we do it nowadays

The expectation today is working software delivered in weeks, not months or years. An "agile" process isn't really adding anything here; it's the norm, and it certainly isn't going to make delivery any faster, even though people might keep saying it will.

Unfortunately not all environments are the same, and in some "agile" is a dirty word, synonymous with pointless meetings and a hamster wheel of sprints which never deliver what they were supposed to, where each sprint can become a mini death march.

Where people are working in this way they don't see any benefits; all they see is a broken process that they're forced to follow because management believe it will mean faster development, better quality, etc. If all you've known is this type of environment then you would most likely think agile should die in a fire, and for a lot of devs that have joined our community in the last 6-8 years this is their only experience of agile.

At the other end of the scale there are people working in environments that make original agile look antiquated: continuous deployment multiple times a day, a focus on adding value for the business, trunk-based development with feature toggling, etc. These types of environment frequently, but not always, have a DevOps culture, looking beyond development to see how the organisation as a whole affects what they do, and the organisation looks for ways to improve: systems thinking.

Where does this leave us?

When you take a look back at the environment that spawned agile, it's easy to see where the claims around faster delivery, better quality, etc. came from; at the time it was a completely different way to work. To be fair, if you are still delivering multi-month or multi-year projects using waterfall then those original claims are still likely to be true today.

For anybody in environments that make most agile practices look backwards: well done, keep up the good work! The only thing I will say is stay vigilant, as it can be all too easy to slip back into bad practices.

If you are working in an "agile" environment that is anything but agile, you may be able to change things by using retrospectives to start conversations around the areas you believe aren't helping, focusing on the principles rather than the practices (outcome not output, working software, etc.) to attempt to improve your situation.

There are almost always ways to improve how you are working, and that should be baked into any process you follow; what is frequently missing is the will to change, or even to try to change.

I firmly believe agile can help you, but it does require you to participate, so make sure you join in and look for ways to improve how you work, regardless of whether it's "agile" or not.

Wednesday, 30 March 2016

Location of VM when using VMWare & Docker on windows

In my previous post I covered how to get Docker working on Windows using VMWare Workstation rather than VirtualBox.

One problem I ran into is that by default Docker will create the VM for the host OS in the user's directory, which usually resides on the C drive.

I try not to have my VMs on C as I tend to keep that drive for my main OS, so I wanted to find a way to move where the VM was created.

The first method I found for doing this involved manually moving the vmdk: opening the VM in Workstation after it was running, stopping Docker, moving the vmdk and editing the VM details in Workstation to point at the new location. Whilst this worked, it is a complete pain, so I dropped that approach and looked for alternatives.

I then found the create option --storage-path, which allows you to specify where the VM should be created. This worked, but again I didn't want to have to specify the location every time; it's good for flexibility, but if I forgot to use it then the VM would end up being created on the C drive.

Then I found there is an environment variable named MACHINE_STORAGE_PATH that Docker looks for, which is used as the root folder when Docker creates the VM it will use as the host OS.

So on my machine I set it to F:\Docker, and Docker then created all the necessary folders (cache, certs, machines) under this folder, with the individual VMs created in the machines folder.

Remember you will need to restart your command/console window once you set the environment variable or it won't take effect.
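
For reference, setting it from a Windows command prompt looks something like the sketch below (F:\Docker is just the folder I chose); setx persists the variable for future sessions, which is why the new console window is needed:

rem persist the variable for the current user; new console windows will pick it up
setx MACHINE_STORAGE_PATH "F:\Docker"

rem in a new console window, confirm it is set before creating any machines
echo %MACHINE_STORAGE_PATH%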

Tuesday, 29 March 2016

Docker using VMWare workstation on windows

I’ve wanted to explore Docker for a while now being on windows meant I’d need to use VirtualBox which I didn’t want to do since I already have VMWare Workstation installed.

After a bit of a google I found that there was a driver to use VMWare with Docker, but no posts describing how to use it. With a bit of trial and error I got it working and thought I'd share how to do it here.

How to..

Before you start you need to be on Windows 7 or above and have VMWare Workstation already installed.

First, install Docker Machine for Windows (you can get it from here); this will install all the normal programs needed to run Docker.

Next you want to get the VMWare Workstation driver from here. It's an exe, but you don't run it; you just need to copy the driver into the folder where you installed Docker Machine (usually C:\Program Files\Docker Toolbox).

At this point you’ll be able to run a create command which will build a Docker instance but you won’t be able to talk to the container.  The reason for this is the network adapters that VirtualBox installs will stop Docker talking to the VMWare virtual machine.

To be happy that it's all working you can simply disable the VirtualBox network adapters, and then any container you create should work. At this point you can either leave things as they are or uninstall VirtualBox.

One side effect of this is that you won't be able to use Kitematic, since it only works with VirtualBox on Windows.

Is this useful?

Even as I write this post I know that the Docker for Windows beta is coming out, but that only uses Hyper-V at the moment. Docker have said that you'll still be able to use Docker Machine after Docker for Windows arrives, so if you want to use VMWare this may still be the best option for you.


Thursday, 27 August 2015

NDepend V6

As anybody who has been reading my blog for a while can tell you, I like NDepend, and a new version was recently released.

Disclaimer: I do not work for NDepend, but I was lucky enough to be provided with a license to use the software; the opinion in this post is my opinion and nobody else's.

My first impression of the new version is that there aren't a huge number of new features in this release; rather, there have been a number of incremental changes and improvements to various parts of it.

What’s new and changed?

V6 brings with it integration for VS2015 (which V5 didn't have) and there have been several enhancements to make the experience of using NDepend smoother: better VS integration, VS theming, high DPI support, the ability to work with portable class libraries, etc.

For a long time, integrating NDepend into a build process on a CI server needed additional work to get it hooked up, but in V6 it's been made easier, with closer integration for a few CI server technologies, TeamCity being the one I'm most interested in.

One area NDepend has suffered with is compiler-generated code, such as anonymous types, lambdas, async/await, etc. This has been tackled in V6, so instead of being told <>f__AnonymousType0<int,string> breaks a rule, it now tries to determine if the rule is really broken or if it's only due to generated code, which will hopefully reduce the number of false positive rule violations.

Additional support has been added around tests, so NDepend can now tell you the percentage of code covered by tests, and the treemap has had additional colouring added to enable you to visualize the code coverage over the assemblies analysed.

My thoughts on V6

As I mentioned before, this release feels like more of an incremental change than a big functional change, but that is to be expected with a mature product such as NDepend.

The enhancements around VS integration, VS theming, etc. are something that the product needed simply to stay where it was, i.e. it's not a selling point per se, rather what people just expect a modern application to support.

Although the test coverage additions seem useful, in practice I find myself using the tools I already have for this, so I'm not sure I'm going to use these new features all that much.

My favourite new feature in V6 is the ability to attach NDepend to a solution using the solution user options (.suo) file rather than the solution (.sln) file. The benefit is that, because .suo files are not normally stored in version control, I can use NDepend on a solution where others in my team don't have it, without causing anybody issues when loading the solution.

Should you upgrade?

If you’re already using NDepend V5 with VS2013, unlikely upgrade and not impacted by most of the enhancements or only use NDepend from the command line then upgrading may not provide much value.

However, if you are going to be upgrading to VS2015 and use NDepend inside of VS or in your build process, then I think you'll want to upgrade to get the latest enhancements.

Wednesday, 29 July 2015

A day of intensive TDD training

Last Saturday (25/7/2015) found me in South Wimbledon attending a 1 day intensive TDD training course being run by Jason Gorman from codemanship.

Although this course is usually run over 2 days, codemanship occasionally runs it in a reduced-price, 1-day intensive format.

I decided to attend because, although I have been trying to do TDD for the past few years, I have never had any formal training, so I wanted to check that what I was doing was correct and see what else I could learn about TDD.

Introduction

Once everybody was set up, Jason told us what we could expect from the day, starting with the most basic TDD techniques, progressing through using TDD to drive the design, then using mocks/stubs, and finally applying techniques to work out the behaviour required from user stories.

Jason pointed out that when you first start doing TDD you need to be mindful of what you are doing to ensure you are following the practices we would cover that day, as it is easy to slip into bad habits. This means you will be slower until you become comfortable with the various practices, but you become faster the more you do them.

TDD Back to the beginning

After the introduction Jason took us right back to basics explaining the red-green-refactor cycle and stressing that the refactor part is just as important as the test to ensure you end up with good code.

Jason then had us pair up to write tests/code around transferring money between two bank accounts; in fact, all activities during the day were done as a pair, which was great as I also got to practice pairing with different people.

Jason didn’t give us any specific guidance about how we should do this, other than write a failing test, make it pass, refactor – not forgetting to refactor tests as well.

After you had created an object-oriented way to handle the transfer, Jason got us to go back to just doing it with variables and then refactor that to find and eliminate the duplication, teasing out methods where necessary and giving you an insight into how the design can emerge from the refactoring.

More TDD Basics

After the first exercise Jason talked to us some more about the techniques you use in TDD to help drive the design of the code, such as tests only having one reason to fail, always writing the assertion first, triangulation, etc.

One thing he stressed was that the idea isn’t to do no design before you code but only do enough design to inform where you start your tests (something we came back to later).

Triangulation really helps with this: as Jason explained, you want to be driving from a specific solution towards a more general solution, writing meaningful, self-explanatory tests using data that helps tell a story, e.g. boundary values.

Another thing that was highlighted was that writing the assertion before you write the code isn't just to ensure you have a failing test, but to make you think about what the code needs to do to satisfy that assertion, and the assertion should be related to helping satisfy the requirement(s).

Armed with this new knowledge we tackled our second example of creating a Fibonacci generator.
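
By way of illustration, here is my own after-the-fact sketch (not the code we wrote on the day) of how triangulating from specific cases towards a general Fibonacci solution might look with NUnit:

using NUnit.Framework;

public static class Fibonacci
{
    // the "general" solution the triangulated tests drive you towards
    public static int NumberAt(int position)
    {
        int previous = 0, current = 1;
        for (var i = 1; i < position; i++)
        {
            var next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }
}

[TestFixture]
public class FibonacciTests
{
    // start with the specific cases, then add cases that force the general solution
    [TestCase(1, ExpectedResult = 1)]
    [TestCase(2, ExpectedResult = 1)]
    [TestCase(3, ExpectedResult = 2)]
    [TestCase(6, ExpectedResult = 8)]
    public int Returns_the_expected_number(int position)
    {
        return Fibonacci.NumberAt(position);
    }
}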

Yet more TDD Basics

After the second exercise Jason talked to us about refining what we were doing: isolating tests, never refactoring with a red test, test organisation, etc.

Another key point here was to pay attention to the requirements so that you don't needlessly create tests you have to throw away later; this tied back to writing the assertion first to make sure you are writing a test that helps satisfy the requirements.

Jason again stressed the importance of maintaining the tests; this means refactoring the tests where necessary to ensure they are clear, easy to understand and testing what is necessary (circling round to not writing needless tests).

We then tackled our 3rd and final exercise of the morning, creating a FizzBuzz program with some additional requirements. If we completed the initial work, Jason challenged us to rewrite the generator so that it didn't use any conditional statements when generating the sequence, i.e. no if, switch or ternary operations.
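
To show the kind of thing the challenge was after, here is one possible conditional-free approach; this is my own sketch rather than the course solution, exploiting the fact that the Fizz/Buzz pattern repeats every 15 numbers:

public static class FizzBuzzGenerator
{
    // lookup table indexed by number % 15, so no if/switch/ternary is needed
    private static readonly string[] Patterns =
    {
        "FizzBuzz", "{0}", "{0}", "Fizz", "{0}", "Buzz", "Fizz", "{0}",
        "{0}", "Fizz", "Buzz", "{0}", "Fizz", "{0}", "{0}"
    };

    public static string Generate(int number)
    {
        return string.Format(Patterns[number % 15], number);
    }
}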

Test Doubles

The afternoon started with Jason touching on the London & Chicago schools of TDD and how the London school favoured the use of the various types of test doubles to allow testing of interaction between classes.

After discussing the various types of double we jumped into our first example of the afternoon creating a system (in the loosest terms possible) that could tell you if you were able to book a holiday for a specific week of a specific year.
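
To give a flavour of the shape of that exercise (all of the names below are my own invention, not the code from the day), a hand-rolled stub lets you test the booking logic without a real availability source:

using NUnit.Framework;

public interface IAvailabilityService
{
    bool IsAvailable(int week, int year);
}

public class HolidayBooking
{
    private readonly IAvailabilityService availability;

    public HolidayBooking(IAvailabilityService availability)
    {
        this.availability = availability;
    }

    // can we book a holiday for the given week of the given year?
    public bool CanBook(int week, int year)
    {
        return availability.IsAvailable(week, year);
    }
}

// hand-rolled stub standing in for the real availability check
public class StubAvailabilityService : IAvailabilityService
{
    public bool Result;

    public bool IsAvailable(int week, int year)
    {
        return Result;
    }
}

[TestFixture]
public class HolidayBookingTests
{
    [Test]
    public void Can_book_when_the_week_is_available()
    {
        var target = new HolidayBooking(new StubAvailabilityService { Result = true });

        Assert.That(target.CanBook(30, 2015), Is.True);
    }
}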

Pulling it together

When we stopped the 4th exercise Jason started talking to us about how we pull all of this together to help design a system.

Users & goals

He talked about determining requirements and how a common way to capture this information is via user stories. What was stressed, though, is that the user story should be seen only as the starting point for a discussion, hopefully with the user, to get more details about the functionality needed.

We utilized the Given…When…Then format to get this additional information, with each of the Then clauses becoming the requirements to be satisfied. Although this helped us gain further understanding of what was required, Jason pointed out that these weren't executable specifications; to understand the specifics we needed concrete examples for each Then to allow us to create tests that can satisfy them.

Class Responsibility Collaboration Cards

With this additional info we could examine the stories to determine the roles, responsibilities and collaborations we believed we needed. To help us visualize the objects we used Class-Responsibility-Collaboration (CRC) cards, with one card per role (object) listing its responsibilities (methods) and the other objects it communicates with (collaborations).

This helps to enable a "tell don't ask" design where you communicate by message passing, which also identifies the boundaries where you can mock the interactions when you come to writing the tests.

Testing the design

We then did an exercise where, in a pair, we took a user story, expanded the details and added specifics; then Jason took our finished work and handed it to another pair to create the CRCs for it.

Once everybody had completed the CRCs, Jason took one or two and showed us how we could test the design by running through the Given…When…Then and seeing if the objects and interactions would successfully satisfy the criteria.

Finishing up

Jason reiterated what he had said at the beginning: we needed to ensure we practised the techniques we had learnt that day, as without practice it is easy to slip into bad habits, and he asked us all to think about what we could do to practise.

We all then went to the pub to have a drink and discuss the day a bit further.

My thoughts on the day

Firstly, I can see why this is billed as an intensive course; we really didn't stop during the day. Ideally I would have liked a little extra time on the exercises, but in general I liked the pace: it kept you focused on what you needed to do without overloading you.

I really enjoyed the course: it reinforced my existing knowledge, highlighted areas where I have picked up bad habits, and I came away having learnt more about triangulation and using CRC cards to test specifications.

Whether you are a complete beginner or have been doing TDD for a while, I'd recommend this course; I'm sure you would get a lot out of it.