Holistic development and the tale of the unclear requirement

I am a big believer in holistic development, but before I go and present my opinion on this, let's start with the elephant in the room: what exactly does holistic mean?

Holistic – characterized by the belief that the parts of something are intimately interconnected and explicable only by reference to the whole.

Oxford Dictionary

So, what does all this mean?

Well, I admit that this might sound a wee bit woo-woo to some people, but bear with me for a while. You might end up liking this holistic approach.

The word that I want to focus on is interconnected. Why? Well, most people strongly believe that in order to have a successful software project you need some requirements, some software developers / coders / engineers, and a manager, and everything falls into place after that. Starting from this assumption, there is a ridiculous number of click-baity articles on the internet that go to the trouble of telling you why software projects fail.

Here: example search

If you take the time to go through some of them, you will probably get a lot of good and useful advice, and you will see that most of them include the following reasons:

  • Poor communication
  • Unclear requirements

Assuming that you have access to a team that is quite capable technically, you still have a 50–50 chance of pulling it through. Are you ready for the coin flip?

Not so fast tiger!

Let's just pause here for a moment and give this some thought. From these two bullet points we can deduce that this has something to do with the attitude and engagement of everybody involved.

Well, let's pick "unclear requirements".

What does this mean? That they were improperly written? That they were way too vague? That some items contradicted each other? What actually was the problem, and more importantly, when did this become a "problem"? Who wrote this anyway? Off with their heads!

Okay… I got a bit carried away there, but we have to admit that this is the way project postmortems work: someone on the business side comes and asks for a ROOT CAUSE ANALYSIS, with the single intent of finding a poor soul that can be made responsible for the current mess, pointing some fingers, and moving on to the next project, where the gods of coin flipping will, hopefully, favor them. But as we will see, more often than not there isn't one single person responsible for something; it's a team effort.

What can we do differently to try to increase our chances in the future? Well, let us pick one of the aforementioned questions and have a go at it with some ridiculous, yet oddly plausible, scenarios:

When did the requirements become a problem ?

Yeah, no kidding, when? Because this is the single most important question that we need to ask and address in order to unlock the other ones.

The point of this question is not to be snarky or to look smart; it is just to pinpoint the moment in time when somebody discovered and "communicated" that the requirements were not crystal clear. By asking this question we can get one of the following scenarios:

1. Right at the beginning

This is the most glorious scenario: someone noticed the problem, mustered the courage, and told everyone that there is a problem here. Hopefully this happened before any estimations were done (if any) or before any code was written. Chances are that in this case the inconsistencies were analyzed and cleared, and the coin heavily favors the "Heads" outcome.

If you are in this case, you probably have an environment where people are engaged and feel safe to raise possible issues.

2. In the middle

Well, let's say that you are ending sprint 7, and one of the epics that was started in sprint 3 and estimated at 2 sprints is still in progress (I know, this hardly ever happens in reality). People start to get antsy, and the project manager sends everyone an email invitation for a status meeting. When the mail hits the inbox, you can hear a rather powerful *gulp* sound resonating through the fancy open space office.

What happens next ?

It's 2 pm, and everybody is waiting in the meeting room. Most of the team members have some knowledge of the problem. Everybody has an opinion; they just don't really want to talk about it. Finally the PM arrives and the meeting kicks off. After the usual pleasantries that are part of this kind of "escalation" meeting, everybody discusses the problem: starting from a possible misunderstanding by the developer who was working on it, to possible implementation issues, to the fact that the requirement itself contradicted an already implemented requirement (feature). After some productive brainstorming, the team goes through a slew of workarounds, alternative implementations, and different technical options, and concludes that the actual problem was with the requirement. They formulate a few alternatives to the requirement that will resolve the problem and send them to the customer, who admits that the initial requirement was flawed, and everybody goes their merry way. What a win!

The manager congratulates everyone for their contribution, praises their ingenuity, and tells his superiors about his team's capacity for solving problems.

Right? Well, all's well that ends well. Is it, though? I mean, the coin still has a slight edge toward the "Heads" side, but let's think about how frequent this is. In all seriousness, how many resources have been spent on a problem that could have been fixed easily in the first scenario?

Let's say it is a team of 5 people with sprints of 2 weeks and a 25% allocated manager. The cost breakdown would be something like this: 1 developer worked on this for 4 sprints (2 months) – 40h / week * 8 weeks, and the whole team participated in a 4-hour meeting (6 people * 4 hours). This is roughly 344 man-hours, and it does not include the implementation of the refined requirement epic. Apart from the cost (paychecks / rent / electricity / free drinks / free coffee), there is also the frustration of the people involved; just imagine (or remember) what it is like to spend 8 weeks, aka 2 months, on something that is doomed to fail. Awesome, right?!
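The back-of-the-envelope math above can be sketched in a few lines, with the numbers taken straight from the scenario:

```csharp
using System;

// Rough cost of the scenario above: one developer on the epic for 4 sprints,
// plus the whole team (5 people + the 25% manager) in a 4-hour status meeting.
var devHours = 40 * 8;         // 40h / week * 8 weeks (4 two-week sprints)
var meetingHours = 6 * 4;      // 6 people * 4 hours
var totalHours = devHours + meetingHours;
Console.WriteLine(totalHours); // 344 man-hours, before re-implementing the epic
```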

I know, I know… This example is a bit extreme, but think about it: this can happen on a story basis, several times a sprint. To make matters worse, all this compounds over time, and sometimes these kinds of problems don't get escalated at all; people start building weird workarounds and dubious implementations, and in the worst cases go along with a pseudo implementation of the contrived requirement, which leads to our next scenario.

3. The release disappointment

So in this scenario, the developer managed to sneak in a really *clever* workaround, jumped through some hoops, and managed to fulfill the crooked requirement. It does what it says in the epic and stories. The release was delayed a few times, but it's agile, you know how this works (spoiler: it doesn't). Software is a form of art, it takes time (quite a lot), but a few months later the time for the great release has arrived: the last commit was made, code freeze.

The team has this awesome CI / CD pipeline, branches, unit tests, 99% code coverage on the business logic code, everything works. The code is live. Everybody pats themselves on the back; tomorrow, the client acceptance meeting will be just a formality. Everything works, no crashes, everything according to the specs.

The next day, the big moment: all stakeholders are here, and the company booked its nicest conference room. The highest ranking person gives the awesome opening speech, everybody laughs, everybody cheers. Some QA or PM starts the presentation / walkthrough of the creation. Everything goes as planned, and it seems the lunch break might even be on time, but at some point, out of nowhere, a resounding "Oh!…" is heard in the room.

Suddenly tension fills the room. Nobody on the development team says anything, hoping for this to just go away… Right when the presenter is about to click the mouse and move on, that one guy on the client's team that you never liked, yeah, the one in the smart casual sporty suit, says something that hits like a ton of bricks:

This is not how we do it!

That one guy!
The aftermath

So, what now? Well, one of your PM / sales guys starts taking notes, asking questions, and taking more notes. After a few days of heated debates, things become a bit clearer: the overall system was good, apart from this one small thing in the core domain. So the sales guy drafts an offer to do the required changes on the company's money, or, if the sales guy is really crafty, he could argue that the software is according to spec, and in some cases he might convince the client to pay for the necessary work as a change request. Now this is a win, mo' money is always good!

Now, after the 6-month, 4-person refactoring project that had to redo the problematic parts of the system, and also the parts that were highly dependent on them, everybody is happy!

Yet again, the example is quite naive and simplistic, but you get the point. The client could be internal or external; it does not really matter. The problem is that this was a major flaw, even if it came in the form of a requirement. Depending on your company's business model this might even be considered a success, but again, it sure does not feel like one.

What can we learn from this scenario? How did we get here? At first, the knee-jerk reaction is to say that since the requirements were wrong, it's not the development team's fault. But, unfortunately, in the real world there are so many cases where people are disengaged, afraid to speak up, or even worse, hide behind the "it's not my job" mantra. This leads to these types of problems occurring quite frequently. They don't have to be this dramatic, and they may occur on a sprint basis, but if this happens even once every two sprints, it will add up.

4. The "everything seems fine"

In this scenario, in the meeting room, that guy in the weird suit was playing Candy Crush on his phone and did not pay attention, nobody else had any comments on the software, and everybody is happy. The company writes a new contract for further development and maintenance of the application for the next 2 years.

I believe you see where I am going with this: this is the most insidious and dangerous path possible.

There are so many ways things can go wrong, but here are just a few:

Your boss decides that, since your team successfully delivered the project from scratch, it should move on to the next new project and let a more junior team handle the further development of the current software, since it is stable and the client was really satisfied.

After a few months of "production time", among all the new functionality, the client adds a new story that corrects the initial flawed requirement. Now the new team, trying to uphold the high standards set by the founding team, goes in guns blazing and starts implementing the changes. But this seems to break a big chunk of the application. Most of the functionality built since then sits on top of that specific requirement; you are in sort of a checkmate position.

This takes way longer than anticipated, more resources get pulled in, and the overall velocity goes out the window. At some point the team thinks they did it and pushes it live, only to have 20 or so bugs reported in the first few hours. Now everybody is talking to everybody, escalation after escalation. Two months in, 70% of the time is spent fixing bugs. Each time something changes, something somewhere breaks. Since this escalated, nobody had the time to update or add automated tests, so currently the only way to get feedback is to have some QA team do smoke tests, all the time, for each change.

Sooner or later the customer will be disappointed, and after that they will decide to switch you for another shop, which will be tasked with the maintenance of this application, transforming their lives into a living hell of workarounds, fixes, bugs, and weird technical solutions.

Your team will probably be burnt out and demoralized. Heck, Jim handed in his resignation and, after completing an online class, opened his own handmade leather goods business. Mary still gets an empty-stomach sensation when she gets an email notification from Jira.

Dramatic, I know, but just try to remember the last project that you were tasked to maintain without having been part of its development. Did you ever wonder why you had to do it? Why was it abandoned by the people who initially developed it?

Conclusion

In this imagination exercise we went through just one question. This whole chain of events was set off by only one bad requirement. Maybe no one really thought about, or really cared about, its underlying implications (coders be coding) and took it at face value; maybe someone actually discovered the problem but did not communicate it. Or maybe someone did raise the problem and it was discussed in the team, but the project manager wasn't informed, or the client wasn't informed.

I will let you do this kind of exercise with the other questions. I have faith in you.

TL;DR: Software development is complex

This is the point of the holistic approach. Building software is a complex system problem. We need to own this, accept this, and start paying more attention to details. Almost everything counts; all the small details come together and influence the outcome. In order to try to deliver a successful project, there are a lot of things that we need to take into account.

For those that are not familiar with complex systems, I'm planning the next post to be a very informal introduction to the concept, and I will also provide some materials (but of course you can always look to cousin Google for this).


Azure Functions and the mediator pattern (using MediatR)

Some time ago I had this really interesting idea: how awesome would it be to use MediatR in functions? I played around with it a bit and got to the point where it worked, but to be honest it felt really clumsy and awkward, so I kind of let it go. Recently, though, with the isolated process, things look a bit more promising. It still feels a bit awkward, but bear with me.

Before we start, if you are not up to speed on MediatR, I recommend CodeOpinion's post about it.

Now, setting up MediatR in isolated functions is rather simple, I mean:

Program.cs
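Something along these lines; a sketch assuming the isolated worker model and the MediatR.Extensions.Microsoft.DependencyInjection package:

```csharp
using MediatR;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // scans the assembly and registers all IRequestHandler implementations
        services.AddMediatR(typeof(Program));
    })
    .Build();

host.Run();
```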

That is it, no reason to even make a gist for this, right? But this opens up the main question that bothered me so much:

Why in the world would you use MediatR in functions?

Me, thinking about this …

Now really, we already have quite a decent way of doing "flows" in the form of durable functions, so why bother with yet another way of abstracting?

Well, the first and most obvious reason is that we can, and if it is possible, someone will do it, right? But this is not a real reason, is it? Good, then why bother writing this article anyway, if it is both easy to set up and a rather pointless or unhelpful exercise? Well, I do have another idea of why we might want to use MediatR in functions.

Portability

Yep, think about this: you already use MediatR in your APIs, and everything is nice and dandy, but you realise that you have an API that actually has only one endpoint implemented with MediatR. Rather than paying for a full web app, or having a VM host this, you could run it almost for free (it depends, but yeah…) as a function on the consumption plan. So, how hard would this be to pull off?

Actually, really easy. Let's look into this.

Step 1 – Copy the Handler and Request code

Let’s say you have something like this in your Handler:

MediatR handler
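A handler along these lines; PingCommand and PingOutput are hypothetical stand-ins here, since the actual definitions are not part of the post:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// PingCommand / PingOutput are illustrative request and response types
public class PingCommandHandler : IRequestHandler<PingCommand, PingOutput>
{
    public Task<PingOutput> Handle(PingCommand request, CancellationToken cancellationToken)
    {
        // whatever business logic your endpoint used to do
        return Task.FromResult(new PingOutput { Message = $"Pong: {request.Message}" });
    }
}
```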

To minimise the length of the post I will not include the Command and Output definitions; they are rather irrelevant in our case (this is part of a future post that will explore building more complex scenarios).

Step 2 – Create the HTTP Handler function

Now, all you need to do is create a Function handler (HTTP, or whatever suits you to trigger this); basically, recreate your controller.

The Function block
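As a sketch, assuming the isolated worker model; PingCommand is a hypothetical request type, adjust to your own:

```csharp
using System.Net;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class PingFunction
{
    private readonly IMediator _mediator;

    // IMediator comes from the container configured in Program.cs
    public PingFunction(IMediator mediator) => _mediator = mediator;

    [Function("Ping")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        // deserialize the body into the request and let MediatR do the rest,
        // just like a controller action would
        var command = await req.ReadFromJsonAsync<PingCommand>();
        var output = await _mediator.Send(command);

        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteAsJsonAsync(output);
        return response;
    }
}
```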

And to be honest, this is it; now you have the same "pipeline" for resolving HTTP requests, but in a serverless fashion.

Conclusion

Well, to be honest, I still have rather mixed feelings about this, and I intend to explore the idea in more depth over the next couple of days and see what I can discover and come up with. But this looks like a promising alternative to durable functions if your flows are rather "short-lived" orchestrations and you don't think you need the "pausing" functionality that is offered by orchestrators.


If you would like to get notifications about future posts, well you know what to do:


Build dynamic workflows with azure durable functions – Branching logic (part 3)

This is part of a mini-series where we want to build a low-code product on the shoulders of azure durable functions:

  1. Build dynamic workflows with azure durable functions (NoCode style) – Part 1
  2. Build dynamic workflows with azure durable functions (Low-Code style) – Part 2
  3. Add branching logic to your dynamic workflows with azure durable functions – Part 3 (this)

So far we have managed to build quite a robust solution for user-configured workflows, but we are missing one important feature: giving users the possibility to configure junction points in the flows. To be more blunt, giving them a construct that resembles if – else.

Now, this is not an easy task; we will need to add a few more constructs to our existing solution (built in the previous two articles), but we will succeed. The solution also got a bit lengthier, so instead of the gist approach from the previous article, all the code is contained in a git repository.

Before we move on: all the ideas and implementations in this article series are more proof of concept and exploration than production grade code, so be aware.

UI / UX – How to present if else to the user

Before we do a deep dive into the code, let’s take a step back and think about how exactly do we present this new and powerful feature to our users.

If you remember from the past articles, this is more or less how we designed our fictional UI. Now, to present this branching to the users, I think a good strategy or analogy would be nesting. So, from a UI / design perspective, what is actually happening here is that we are creating a flow within a flow, and based on some arbitrary condition that is set dynamically by the user, we will run one or the other.

Now that we have a visual representation for this, we can go into the main event: how do we manage to pull this off?

As a small reminder, this is part of a series of posts, and here I will mostly go through the changes from the previous version.

So, how to do no-code style branching

So, in order to achieve this behavior dynamically, we first need to understand and come up with a way to encode the if-else dynamic.

The way I chose to implement this is mostly rudimentary and is inspired by the functional (math), binary style of doing branching. We will have sets of two lists, called Left and Right, which contain the steps for each branch, and a condition: if the condition evaluates to true the left branch will run, otherwise the right branch will run.

From an invocation point of view, we are adding a new orchestrator, Branch, to the mix, and a new activity that does the actual condition evaluation.

From a call stack perspective, this will be implemented using recursion, since we already have the Dynamic Orchestrator.

Implementing the branching logic

Now let's get to the core of this article: the code.

First, as discussed earlier, we need to add the Left and Right lists to the dynamic step class, and we also need to modify the constructor to use these new constructs:

    public class DynamicStep<T, U>
    {
        public string Action { get; private set; }
        public U param { get; private set; }
        public string Fn { get; }

        public List<DynamicStep<T, U>> LeftSteps { get; private set; }

        public List<DynamicStep<T, U>> RightSteps { get; private set; }

        public DynamicStep(string action, U param)
        {
            Action = action;
            this.param = param;
            Fn = string.Empty;
        }

        public static DynamicStep<T, U> Branch(string condition, List<DynamicStep<T, U>> leftSteps, List<DynamicStep<T, U>> rightSteps, U param = default)
        {
            return new DynamicStep<T, U>(ActionName.Branch, param, condition, leftSteps, rightSteps);
        }

        [JsonConstructor]
        public DynamicStep(string action, U param, string fn, List<DynamicStep<T, U>> leftSteps = default, List<DynamicStep<T, U>> rightSteps = default)
        {
            Action = action;
            this.param = param;
            Fn = fn;
            LeftSteps = leftSteps;
            RightSteps = rightSteps;
        }
    }

To be able to test, we added the branch step to the FlowMaker function in order to kick things off. In theory, this should be constructed based on a query from a data store.

    [FunctionName("FlowMaker")]
    public static async Task<double> Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var steps = new List<DynamicStep<double, double>>
        {
            new DynamicStep<double, double>(ActionName.Add, 1),
            new DynamicStep<double, double>(ActionName.Add, 2),
            new DynamicStep<double, double>(ActionName.Add, 3),
            new DynamicStep<double, double>(ActionName.Dynamic, 2, "(2 * r + 1)/p"), // <-- simulate loading from a datasource
            DynamicStep<double, double>.Branch("r % 2 == 0",
                new List<DynamicStep<double, double>>
                {
                    new DynamicStep<double, double>(ActionName.Divide, 2)
                }, new List<DynamicStep<double, double>>())
        };

        var ctx = new DynamicFlowContext
        {
            Steps = steps
        };

        var result = await context.CallSubOrchestratorAsync<DynamicResult<double>>("DynamicOrchestrator", ctx);
        return result.Result;
    }
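Since DynamicStep has a [JsonConstructor], a flow like the one above could be persisted as plain JSON and loaded from the data store. A hedged sketch (exact property casing depends on your serializer settings, and the action names assume the ActionName constants serialize as shown):

```json
[
  { "action": "Add",     "param": 1, "fn": "" },
  { "action": "Dynamic", "param": 2, "fn": "(2 * r + 1)/p" },
  { "action": "Branch",  "param": 0, "fn": "r % 2 == 0",
    "leftSteps":  [ { "action": "Divide", "param": 2, "fn": "" } ],
    "rightSteps": [] }
]
```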

We also need to modify the Dynamic Orchestrator to know how to handle the new step type, Branch, since it is a bit different: for branching it will instantiate a sub-orchestrator instead of calling a normal activity. I left it like this so as not to make things even more complicated to understand, but normally I would use the Dynamic Orchestrator technique.

    [FunctionName("DynamicOrchestrator")]
    public static async Task<DynamicResult<double>> RunInnerOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext ctx)
    {
        var input = ctx.GetInput<DynamicFlowContext>();
        double state = input.State != default ? input.State : 0;

        foreach (var step in input.Steps)
        {
            if (step.Action == ActionName.Branch)
            {
                var result = await ctx.CallSubOrchestratorAsync<double>("Branch", new BranchContext<double, double>()
                {
                    State = state,
                    DynamicStep = step
                });

                return new DynamicResult<double>
                {
                    Result = Convert.ToDouble(result)
                };
            }
            else
            {
                state = await ctx.CallActivityAsync<double>(step.Action, new DynamicParam
                {
                    Accumulator = state,
                    Parameter = step.param,
                    Fn = step.Fn,
                });
            }
        }

        return new DynamicResult<double>
        {
            Result = state
        };
    }

Now that we have all this in place, we can call our Branch Orchestrator which looks like this:

    [FunctionName("Branch")] // must match the name used in CallSubOrchestratorAsync
    public static async Task<double> Branch([OrchestrationTrigger] IDurableOrchestrationContext context, ILogger logger)
    {
        var input = context.GetInput<BranchContext<double, double>>();
        var param = input.DynamicStep;
        var newState = input.State;

        var branchToRun = await context.CallActivityAsync<bool>("BranchAction", new DynamicParam()
        {
            Accumulator = input.State,
            Fn = param.Fn,
            Parameter = param.param,
        });

        if (branchToRun)
        {
            if (param.LeftSteps != null && param.LeftSteps.Count > 0)
            {
                return await context.CallSubOrchestratorAsync<double>("DynamicOrchestrator", new DynamicFlowContext()
                {
                    State = newState,
                    Steps = param.LeftSteps
                });
            }

            return newState;
        }
        else
        {
            if (param.RightSteps != null && param.RightSteps.Count > 0)
            {
                return await context.CallSubOrchestratorAsync<double>("DynamicOrchestrator", new DynamicFlowContext()
                {
                    State = newState,
                    Steps = param.RightSteps,
                });
            }

            return newState;
        }
    }

You can see in the orchestrator that, based on the evaluation of the condition, we decide which branch will run. The branch itself will be run using the Dynamic Orchestrator, which opens up the possibility of nested branching without any changes to our code; recursion at its best.

The last piece of the puzzle is the Branch action, which evaluates the condition in a similar way to the Dynamic Action; the main difference is that this one returns a boolean instead of a result of T.

    [FunctionName("BranchAction")]
    public static async Task<bool> DynamicBranchAction([ActivityTrigger] DynamicParam param, ILogger log)
    {
        var func = new Engine()
            .Execute($"function branch(r, p){{ return {param.Fn} }}").GetValue("branch");

        var invoked = func.Invoke(param.Accumulator, param.Parameter);
        bool.TryParse(invoked.ToString(), out var result);

        return result;
    }

As you can see, the actual implementation is quite trivial, and to be honest this is what I like about the orchestrator functions framework: it frees us to build all kinds of interesting constructs.

Although there are quite a lot of code bits here, I recommend you check out the git repository and look at the whole solution.

Hope you have enjoyed our little adventure in the realm of azure functions. If you would like to get informed when a new post is added, join the list 🙂


Build dynamic Linq filters (aka. where() predicates)

As you might have noticed, I am in a more "meta" programming mood recently. To be honest, lately I have played a lot with Blazor, building some wacky things, and I more or less arrived at the following problem: how can we build a dynamic predicate for a list.Where(item => ??) type clause? Why, you ask, might we need this? Well, there are a lot of reasons / cases where we might need something like this, but in this specific case I have a Blazor component that adds some strings to a list, and based on that I want to filter another list. Simple, right?

The “Demo” problem

Let’s say we have a list of strings that we want to filter.

Something along the lines of this:

	var list = new List<string>(){
		"a", "ab", "abc", "abcd", "abcde"
	};

Now, let's say we want to pick only the items that contain "c". We would do something like this:

var normalWay = list.Where(e => e.IndexOf("c") > -1);

And this seems quite straightforward, but let's say we also need to check for "d":

var normalWay = list.Where(e => e.IndexOf("c") > -1 && e.IndexOf("d") > -1);

And now, let's say that we need "c" and "d", or "b":

var normalWay = list.Where(e => (e.IndexOf("c") > -1 && e.IndexOf("d") > -1) || e.IndexOf("b") > -1);

And you can see this is starting to get a bit out of hand.

Now, the title had the word "dynamic" in it, so let's see how we can do this dynamically.

So, let's start simple; we will only take the "and" case, with the ability to add multiple requirements:

	var containWay = new List<string>() { "c", "d" };

	var withContains = list.Where(e => {

		var matchedAll = true;
		foreach (var letter in containWay) {
			matchedAll = e.IndexOf(letter) > -1;
			if (!matchedAll) return false;
		}

		return matchedAll;
	});

Now, this works for this specific case, but it is way too specific for my taste, and it only covers the "And" / "&&" case. We can do an imagination exercise and see how this would easily go bananas complexity- and maintainability-wise.

The solution – Dynamic filtering using expression trees

Well, we are not going to build expression trees per se, because they are way too complicated for this, and I don't think that is the best idea; but if you don't know too much about them, the official documentation is a good place to start.

Now, back to our conundrum. As I said, we are going to use function composition to build our own "home-made" expression filters that will allow us to compose our filters.

Now, I am not going to put you through the whole process of discovery, so here is the class I ended up using; we will go through how it works after.

public class FilterBuilder<T>
{
	private Stack<Func<T, bool>> stack = new();

	private FilterBuilder(Func<T, bool> filter)
	{
		this.stack.Push(filter);
	}

	public static FilterBuilder<T> Create(Func<T, bool> filter)
	{
		return new FilterBuilder<T>(filter);
	}

	public FilterBuilder<T> And(Func<T, bool> filter)
	{
		// pop the current filter into a local so the new closure captures it
		var q = this.stack.Pop();

		Func<T, bool> result = (item) =>
		{
			return q(item) && filter(item);
		};

		this.stack.Push(result);

		return this;
	}

	public FilterBuilder<T> Or(Func<T, bool> filter)
	{
		var q = this.stack.Pop();

		Func<T, bool> result = (item) =>
		{
			return q(item) || filter(item);
		};

		this.stack.Push(result);

		return this;
	}

	// the composed predicate currently sitting on top of the stack
	public Func<T, bool> Filter => this.stack.First();

	public bool Test(T item)
	{
		return this.stack.First()(item);
	}
}

Hehe, well, I guess the first thing you saw was the stack there. What is the deal with the stack? We will get to that in a moment, but first let's see how we can use this wonderful FilterBuilder class.

// case 2: "c" and "d"
var fb = FilterBuilder<string>.Create(word => word.IndexOf("c") > -1)
					.And(word => word.IndexOf("d") > -1);

// case 3: ("c" and "d") or "b"
var fb = FilterBuilder<string>
			.Create(word => word.IndexOf("c") > -1)
				.And(word => word.IndexOf("d") > -1)
			.Or(word => word.IndexOf("b") > -1);

// usage
var result = list.Where(fb.Filter); // or list.Where(wrd => fb.Filter(wrd)), it's a matter of taste 🙂


Now, isn't this cool? One thing to keep in mind is the composition direction: each new operation wraps the previous result and is evaluated after it, more or less like math (order of operations of sorts). I encourage you to put in some breakpoints and have some fun with it.

Now, if you are thinking that this might also get out of hand quite quickly, you can also compose builders:

	var list = new List<string>(){
		"a", "ab", "abc", "abcd", "abcde", "bx", "dcba", "dcb", "d",
	};

	var fb = FilterBuilder<string>
				.Create(word => word.IndexOf("c") > -1)
					.And(word => word.IndexOf("d") > -1)
				.Or(word => word.IndexOf("b") > -1);

	var fb2 = FilterBuilder<string>
				.Create(word => word.StartsWith("a"));

	var fbFinal = FilterBuilder<string>
				.Create(fb.Filter).And(fb2.Filter);

Okay, so you admit you like this. But still, what is the deal with the stack, you ask?

Well… to be honest, it is more or less a small hack to do functional composition dynamically. Every And / Or needs to wrap the current predicate in a new one, ending up with something like func(func(func())), and since Func delegates are reference types, we need somewhere to hold the latest composed delegate between calls. So we use the stack as a one-slot buffer: we pop the current function, capture it inside a new closure (which keeps everything it needs inside its scope), and push the composed result back. The popped reference is released from the builder itself, so anything the closure chain no longer needs is free to be garbage collected.
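If a second angle helps, here is the same pop, wrap, push mechanic sketched in plain JavaScript. This is an illustrative toy, not the C# class above; the names createFilterBuilder, and, or are made up for this sketch:

```javascript
// Minimal sketch of the FilterBuilder idea: keep the current
// composed predicate on a one-item stack, and replace it on
// every and / or call with a new closure wrapping the old one.
function createFilterBuilder(filter) {
  const stack = [filter];
  return {
    and(next) {
      const prev = stack.pop();
      stack.push((item) => prev(item) && next(item));
      return this;
    },
    or(next) {
      const prev = stack.pop();
      stack.push((item) => prev(item) || next(item));
      return this;
    },
    get filter() {
      // the single stack slot always holds the fully composed predicate
      return stack[0];
    },
  };
}

const fb = createFilterBuilder((w) => w.includes("c"))
  .or((w) => w.includes("b"));

const result = ["a", "ab", "abc", "bx", "d"].filter(fb.filter);
// result: ["ab", "abc", "bx"]
```

Note how the closure created in and / or is the only thing that keeps the previous predicate alive; the builder itself only ever holds the latest one.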

Now, if you are wondering how exactly this can be used dynamically (built from a database, or from user interactions), here is the example you were looking for. This isn’t production grade, but it should help you get the point:

	// dynamic example
    // this comes from the ui / db, serialized however you want
	var containWay = new List<(string, string)>(){
		("and", "c"),
		("and", "d"),
		("or", "b")
	};
	
	FilterBuilder<string> f = null;
	
	foreach(var (rule, letter) in containWay){
		if(f == null){
			f = FilterBuilder<string>.Create(stringRule( letter));
		}else{
			f = rule switch {
				"and" => f.And(stringRule(letter)),
				"or" => f.Or(stringRule(letter)),
				_ => throw new Exception()
			};
		}
	}
	
	var r = list.Where(f.Filter);
	r.Dump(); // did it LinqPad 😀
	
	// helper function for your rules
	Func<string, bool> stringRule(string letter){
		
		return (wrd) => {
			 return wrd.IndexOf(letter) > -1;
		};
	}

So, in this whole post we used just a dummy string list. The beauty of this is that, since we use lambdas, we can filter any kind of object: just replace the “string” and ka-boom!
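To illustrate that last point, here is a small sketch, again in JavaScript so it fits in a console, of composing predicates over objects instead of strings. The person objects and predicate names are invented for the example:

```javascript
// Same composition trick, applied to objects instead of strings.
const people = [
  { name: "Ana", age: 17 },
  { name: "Bob", age: 42 },
  { name: "Cleo", age: 30 },
];

// Two predicates over the object, combined exactly like And() does.
const isAdult = (p) => p.age >= 18;
const nameHasO = (p) => p.name.includes("o");
const combined = (p) => isAdult(p) && nameHasO(p);

const adultsWithO = people.filter(combined).map((p) => p.name);
// adultsWithO: ["Bob", "Cleo"]
```

Nothing about the builder cares what T is; only the lambdas know.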

There you have it, an interesting way of building dynamic filters. If you like this kind of content, let me know in the comments.


Build dynamic workflows with azure durable functions (Low-Code style) – Part 2

This is part of a mini-series where we want to build low code product on the shoulders of azure durable functions:

  1. Build dynamic workflows with azure durable functions (NoCode style) – Part 1
  2. Build dynamic workflows with azure durable functions (Low-Code style) – Part 2 (this)
  3. Add branching logic to your dynamic workflows with azure durable functions – Part 3

Before moving on, if you haven’t read part 1, I highly advise you do; this article won’t go anywhere, I promise 😀.

In the previous article, we looked at building an approach to pulling off a “No Code”- like workflow. We looked at how we can build the required data structures and actions, allowing the user to control the workflow order. 

In this post, we will be looking into giving the user the option to add his small bits of code in our rather “complex” workflow. 

Now, keeping our same calculator theme, we can imagine that our previous version was a hit. Still, the people up in finance need some more steps, like calculating the power of the result or applying a 2n + 1 formula to the mix (don’t ask me why; nobody knows what they do 😀).

So, how can we achieve this? 

One way of doing this might be to keep adding actions to our flow in the same way we did so far for the basic operations. But this could turn into a slippery slope, since each new addition will require a deployment, and we might become a bottleneck. In that case, all the work we did so far might be for nothing, because, if we remember, one of the main reasons for going down this path was to pass some of the responsibility to the actual stakeholders.

What if we provide the users a simple interface where they could add their own “math”? Wouldn’t that be awesome? I definitely think it would be super awesome, so let’s see how we can achieve it.

First, we will need to update our UI from last time to give the users the possibility to actually do this:

As you can see, there is a new “custom” option in the combo box, and we added a textbox where they can add their math 😎.

Now, since we managed to do all of this, let’s see how we will integrate this into our existing code. 

The Code

First, we need to tackle the elephant in the room. How could we do the math part in our code? Well, as the answer to most problems nowadays, Javascript comes to the rescue.

Wait, what? Javascript? Wasn’t this based on C# / .Net? 

Yes. There are actually multiple ways to solve this problem; two of them would be C# script and JS, and both would work wonderfully. Still, in reality, the people using this application would be “citizen developers,” so I think it would be easier for them to use JS. Also, there are tons of resources on JS.

So, while playing with this idea for the article, after trying various C# script approaches, Roslyn compilers, etc., the JS idea hit me. After a few google searches, I came across an amazing NuGet package that allows us to parse and execute JS in .Net. The package name is Jint, and by the looks of it, it is awesome. So, yeah, to answer this bluntly, we will run JS in our C# code, and yes, we will be breaking the “NEVER EVER USE EVAL()” rule, but we will be careful. I promise.

So, the first change we need to make is to change the structure of the DynamicStep class to be able to include the custom js code: 

    public class DynamicStep<T, U>
    {
        public string Action { get; private set; }
        public U param { get; private set; }
        public string Fn { get; }

        public DynamicStep(string action, U param)
        {
            Action = action;
            this.param = param;
            Fn = string.Empty;
        }

        [JsonConstructor]
        public DynamicStep(string action, U param, string fn)
        {
            Action = action;
            this.param = param;
            Fn = fn;
        }
    }

The Fn property will hold the JS code. Next, we will need to add the Dynamic option to the list of accepted actions.

    public static class ActionName
    {
        public const string Add = "Add";
        public const string Subtract = "Subtract";
        public const string Multiply = "Multiply";
        public const string Divide = "Divide";
        public const string Dynamic = "Dynamic";
    }

Next we will need the new “data” from the data source in the orchestrator:

        public static async Task<double> Run([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            var steps = new List<DynamicStep<int, int>>
            {
                new DynamicStep<int, int>(ActionName.Add, 1),
                new DynamicStep<int, int>(ActionName.Add, 2),
                new DynamicStep<int, int>(ActionName.Add, 3),
                new DynamicStep<int, int>(ActionName.Dynamic, 2, "(2 * r + 1)/p"), // <-- simulate loading from a data source
            };

            var ctx = new DynamicFlowContext
            {
                Steps = steps
            };

            var result = await context.CallSubOrchestratorAsync<DynamicResult<double>>("DynamicOrchestrator", ctx);
            return result.Result;
        }

The next adjustment will be to the sub orchestrator that does all the work:

        [FunctionName("DynamicOrchestrator")]
        public static async Task<DynamicResult<double>> RunInnerOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext ctx)
        {
            var input = ctx.GetInput<DynamicFlowContext>();
            double state = 0;

            foreach (var step in input.Steps)
            {
                // state = await ctx.CallActivityAsync<int>(step.Action, (state, (step.param, step.Fn)));
                state = await ctx.CallActivityAsync<double>(step.Action, new DynamicParam
                {
                    Accumulator = state,
                    Parameter = step.param,
                    Fn = step.Fn,
                });
            }

            return new DynamicResult<double>
            {
                Result = state
            };
        }

Now, the keen-eyed among you might have noticed that we added a new parameter type. This was required to reduce the arity of the functions, so that we have a common structure for both coded and dynamic actions. Here is the type, nothing fancy:

        public class DynamicParam
        {
            public double Accumulator { get; set; }
            public int Parameter { get; set; }
            public string Fn { get; set; }
        }

And now for the “magic” part, the dynamic function. Let’s see it first and discuss it after:

        [FunctionName("Dynamic")]
        public static async Task<double> DynamicCalculate([ActivityTrigger] DynamicParam param, ILogger log)
        {
            var func = new Engine()
                .Execute($"function dyn(r, p){{ return {param.Fn} }}").GetValue("dyn");

            var invoked = func.Invoke(param.Accumulator, param.Parameter);

            double.TryParse(invoked.ToString(), out var result);
            return result;
        }

So, let’s break this down. The func variable is how we register the JS function definition; think of the Engine instance as the browser console where you would test something. Once we define the function, we call it with the Invoke(parameters) part. And after that… well, we parse the result. There may be a better way of doing this (as I said, I just discovered the library), but it works fine for our purpose here.
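If it helps demystify the Jint part, this is roughly the equivalent of what the engine ends up evaluating, written in plain JavaScript. Note that new Function is itself an eval-like mechanism, so the same caution from above applies; the sample expression is the one from the orchestrator:

```javascript
// The user's expression gets wrapped into a function body,
// then invoked with the accumulator (r) and the step parameter (p),
// just like the C# code does with Engine.Execute + Invoke.
const userExpression = "(2 * r + 1)/p"; // comes from the UI / db

const dyn = new Function("r", "p", `return ${userExpression}`);

const value = dyn(6, 2); // r = 6, p = 2
// value: (2 * 6 + 1) / 2 = 6.5
```

You can paste exactly this into a browser console, which is also a decent way to let users sanity-check their formulas before saving them.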

Now, I sense a question… why did I define the function myself and only let the user supply a simple expression? This decision is based on the “calculator” example: since the fictional users are not developers but finance people, giving them the option to use an Excel-like formula seemed more appropriate. Of course, you could let the user define the whole function if you want.

So there you have it, your very own “low-code” workflow. Amazing, right? Did I just add to the no-code / low-code craziness? I don’t know. Is this production ready? No. Is this an idea worth looking into? Maybe. There are endless business requirements and ways to fulfill them, and this might be the seed of one of those ways.

Now, in reality, if you think about it, we are building a very crude version of Logic Apps / Power Apps, and you might wonder: why bother? It is a good question. Sometimes you don’t need to build something like this and can get away with existing products, but sometimes you need custom behavior that would be way more complicated to build with those products. So, as always, it depends.

Now, I plan on doing one more post in this series, but I can’t make any promises on when it will land. It will be about integrating branching logic into our dynamic “low-code” platform. Right now I haven’t thought about how to do it, but I imagine it would be quite the feature 🙂

If you want to look at the code, here is the link to the gist.

If you want to be notified when the next post will go live … you know what to do.


Build dynamic workflows with azure durable functions (NoCode style)

This is part of a mini-series where we want to build low code product on the shoulders of azure durable functions:

  1. Build dynamic workflows with azure durable functions (NoCode style) – Part 1 (this)
  2. Build dynamic workflows with azure durable functions (Low-Code style) – Part 2
  3. Add branching logic to your dynamic workflows with azure durable functions – Part 3

Some of the hottest buzzwords nowadays are “no-code” and “low-code”, and to be honest, I’ve been looking into building some kind of platform like this for years. Don’t get me wrong, this is not me saying that I am clairvoyant and have seen the future; it’s me saying that I am lazy. I have always looked for ways of pushing some of the application’s responsibilities to BAs / POs and, why not, the actual person using the system. Why, you ask? Let’s be honest here: most of the knowledge transfer between domain experts and developers goes through a “translation” process, from a business-focused view to a more tech-focused view, and some things get lost in translation most of the time.


If we manage to write some software that gives people the options to customize at least parts of their daily workflows, I think we have a much greater chance of success. Apart from this, there is also the idea that the only constant is change, and let’s say when a flow must change slightly, in the classic approach, we would need to do a ticket, a redeploy, etc.


If you have read many articles on this blog, you know we will build a ridiculous example to show off this idea. Yes, you are right. To do this, we will be building probably the most complicated simple calculator ever made (I am sure there is an enterprise Java edition somewhere that will take the prize, but nevertheless).


Now, you need to be careful, because once you get a taste for metaprogramming and dynamic stuff, you are done. There is no turning back. It’s more or less like opening Pandora’s box: you get hooked. Now that I have warned you, let’s see what we are building today.

This article assumes you have some working knowledge / experience building Azure Functions and Azure Durable Functions. If you don’t, no biggie: you can find several articles on this blog (like this), or heck, even google it. Also, I recommend reading “Dynamically select orchestrators in Azure Durable Functions“.

The usual disclaimer stuff:

Everything you see here is mostly theoretical and has an illustrative purpose. I tried to simplify it as much as possible while still keeping some best practices (ish…). Although I use these techniques in production, what you see here totally lacks security, logging, exception handling, and tests.

The Intro

Let’s imagine we have a UI like this. Think some kind of Angular / React / Blazor, whatever floats your boat; it doesn’t really matter, since we will not build it now, it is just for context.

As you can see, we have a drop-down where we pick the action, an input box for the value, and a button to add it to the flow. Then we have a list with all the steps, and a save button at the bottom. Quite straightforward, I’d say.

Now, what does this have to do with dynamic workflows? Well, up there we are actually building the workflow. In our case, we provided the building blocks like “Add” and “Multiply”, but we are giving the user of the application the freedom to choose how they combine them, meaning the order of the operations.

One more thing before we jump into the code: in this case, we assume that we will always be starting from 0.

Good, now that we have all this settled, let’s assume that this data is saved in a storage somewhere in a form like this:

{
    "Name": "Complex Calculation",
    "Version": "1.8",
    "Steps": [
        {
            "Action": "Add", 
            "Param": 5
        },
        {
            "Action": "Add", 
            "Param": 1
        }
...
    ]
}
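Just to make the intent of this structure concrete, here is a small JavaScript sketch of how such a saved flow can be replayed by dispatching on the Action name. This is illustration only; in the real solution the dispatching is done by the durable orchestrator shown in the C# code that follows:

```javascript
// A saved flow, shaped like the JSON above (sample data invented here).
const flow = {
  Name: "Complex Calculation",
  Steps: [
    { Action: "Add", Param: 5 },
    { Action: "Add", Param: 1 },
    { Action: "Multiply", Param: 10 },
  ],
};

// The building blocks the platform offers.
const actions = {
  Add: (a, b) => a + b,
  Subtract: (a, b) => a - b,
  Multiply: (a, b) => a * b,
  Divide: (a, b) => a / b,
};

// Replay the flow: fold the steps over an accumulator,
// always starting from 0, as stated above.
const result = flow.Steps.reduce(
  (state, step) => actions[step.Action](state, step.Param),
  0
);
// result: 60
```

The orchestrator below does essentially this reduce, except each “action” call becomes a durable activity invocation.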

The coding part

This article is focused on building the back-end for this, so before we do any kind of Azure Durable magic, we first need to set up all the building blocks / capabilities that we want to offer our users. For this example, we are using simple Azure Functions.

        [FunctionName("Add")]
        public static async Task<int> Add([ActivityTrigger] (int a, int b) numbers)
        {
            return numbers.a + numbers.b;
        }

        [FunctionName("Subtract")]
        public static async Task<int> Subtract([ActivityTrigger] (int a, int b) numbers)
        {
            return numbers.a - numbers.b;
        }

        [FunctionName("Multiply")]
        public static async Task<int> Multiply([ActivityTrigger] (int a, int b) numbers)
        {
            return numbers.a * numbers.b;
        }

        [FunctionName("Divide")]
        public static async Task<int> Divide([ActivityTrigger] (int a, int b) numbers)
        {
            return numbers.a / numbers.b;
        }

As you see, nothing fancy going on here, just some pure and simple math.

Now next, in order to make this at least a bit more professional looking and a bit dynamic we will need a few extra constructs:

    public class DynamicStep<T>
    {
        public string Action { get; private set; }
        public T param { get; private set; }

        [JsonConstructor]
        public DynamicStep(string action, T param)
        {
            Action = action;
            this.param = param;
        }
    }

    public class DynamicResult<T>
    {
        public T Result { get; set; }
    }

    public static class ActionName
    {
        public const string Add = "Add";
        public const string Subtract = "Subtract";
        public const string Multiply = "Multiply";
        public const string Divide = "Divide";
    }

Here we have three lovely classes, which hopefully have quite descriptive names; if it is not yet clear what each one does, it will become clear in the next snippet.

Next, we have the orchestrator that will get the flow data saved from the UI and call our magic “dynamic” orchestrator.

        [FunctionName("FlowMaker")]
        public static async Task<int> Run([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            // imagine this comes from an api call or cosmos or something
            var steps = new DynamicFlowContext
            {
                Steps = new List<DynamicStep<int>>
                {
                    new DynamicStep<int>(ActionName.Add, 1),
                    new DynamicStep<int>(ActionName.Add, 1),
                    new DynamicStep<int>(ActionName.Subtract, 1),
                    new DynamicStep<int>(ActionName.Multiply, 10)
                }
            };

            var result = await context.CallSubOrchestratorAsync<DynamicResult<int>>("DynamicOrchestrator", steps);
            return result.Result;
        }

As you can see, so far this looks fairly standard, but now let’s see this magic “DynamicOrchestrator”.

        [FunctionName("DynamicOrchestrator")]
        public static async Task<DynamicResult<int>> RunInnerOrchestrator([OrchestrationTrigger] IDurableOrchestrationContext ctx)
        {
            var input = ctx.GetInput<DynamicFlowContext>();
            int state = 0;

            foreach (var step in input.Steps)
            {
                state = await ctx.CallActivityAsync<int>(step.Action, (state, step.param));
            }

            return new DynamicResult<int>
            {
                Result = state
            };
        }

I know, right? Kind of disappointing. You were probably hoping to find something very complicated and quite “smart” here, and yet all you get is this lousy “for each”…

Now, in all seriousness, this is what I actually like about the in-process model of the durable functions framework. It allows you to build quite interesting constructs quite easily.

The conclusion

I hope you found this an interesting idea, but before you go full bananas, there are some small issues you should know about, or at least one big issue. Right now, there is no support for generic Azure Functions, which does complicate things a bit. What this means is that you will need to create a “magic” orchestrator per return type. But other than that, I think this is quite an awesome trick.

This time, I did not create a git repo, but if you want to see the code, I put everything together in a gist right here.

If you liked the content and would like to be notified …. well you know the drill…


Interviewing developers: To FizzBuzz or not to FizzBuzz

I have held many interviews over the last few years, and I have never asked anyone to do whiteboard coding or to code during the interview, nor have I had anyone do a screening on a HackerRank-style platform. What I occasionally do after the interview, if I consider it makes sense, is ask participants if they would like to do a small home assignment (usually something that should not take more than 30 minutes to an hour).

Before we get all worked up about anything, let me just say that interviewing and recruiting are incredibly complex topics, and the following post is more of a brain dump.

Now that you are a bit familiar with my modus operandi, we can start unpacking the reasons why I think this leads to the best outcomes for mid to senior-level positions in business-oriented software.

First and foremost, let’s address the elephant in the room: why I think there is a problem with this kind of practice.

But first, a story

If you are a reader of this blog, you should be expecting this; there is no getting around stories. I mean, this is written so you could skim or jump over it, but I think it is quite important to the whole narrative.

Some time ago, 2 or 3 years I think, I came across this cool and nifty platform called codingame.com, and I randomly picked a nice little challenge.

Quote: “The goal of the problem is to simulate an old airport terminal display: your program must display a line of text in ASCII art.”

And as you can imagine, I gave it a shot, and to be honest, it was quite fun. I finished it, passed the tests and the hidden tests, and then the most exciting thing happened: I got access to other people’s solutions, ordered by votes (gamification, yey!). Of course, as you would imagine, I clicked directly on the top one, and oh boy, I got floored. It was at that very moment that something clicked: I found out why this approach, although fun for leisure and practice, is so bad for evaluating someone, especially in time-constrained, high-pressure, and high-stakes scenarios. But we are getting ahead of ourselves; back to the story.

If you took the time to check the challenge, awesome. If not, this is the gist of it.

You get an ASCII art font like this :

   __ ___ ___ ____ ____ __ ____ ____ 

  /__\ / __) / __)(_ _)(_ _) /__\ ( _ \(_ _)

 /(__)\ \__ \( (__ _)(_ _)(_ /(__)\ ) / )(  

(__)(__)(___/ \___)(____)(____) (__)(__)(_)\_) (__) 

You also get the width and height of a character of that font, and you need to write strings in ASCII art. Quite straightforward. Since this is not about solving the challenge itself, we will not spend too much time on the specifics.

The way I solved this was this: 

class Solution
{
    static void Main(string[] args)
    {
        int L = int.Parse(Console.ReadLine());
        int H = int.Parse(Console.ReadLine());

        var writer = new Writer(H, L);

        string T = Console.ReadLine();
        for (int i = 0; i < H; i++)
        {
            writer.PopulateLetters(Console.ReadLine());
        }
        writer.ToAsciiString(T, true);
    }
}

class Letter
{
    public int height { get; private set; }
    public int width { get; private set; }
    public int startPoint { get; private set; }
    public List<string> AsciiMatrix;

    private Letter() { }

    public Letter(int h, int w, int s)
    {
        AsciiMatrix = new List<string>();
        height = h;
        width = w;
        startPoint = s;
    }

    public void AddRow(string row)
    {
        AsciiMatrix.Add(row.Substring(this.startPoint, width));
    }
}

class Writer
{
    public Dictionary<char, Letter> dict = new Dictionary<char, Letter>();
    private string charList = "ABCDEFGHIJKLMNOPQRSTUVWXYZ?";
    private int height;

    public Writer(int h, int w)
    {
        charList
            .ToCharArray()
            .ToList()
            .ForEach(c => dict.Add(c, new Letter(h, w, charList.IndexOf(c) * w)));
        height = h;
    }

    public void PopulateLetters(string row)
    {
        foreach (KeyValuePair<char, Letter> entry in dict)
        {
            entry.Value.AddRow(row);
        }
    }

    public List<string> ToAsciiString(string s, bool print)
    {
        var searchString = string.Join("", s.ToUpper().ToCharArray().ToList().Select(x =>
        {
            return charList.Contains(x) ? x : '?';
        }).ToList());

        List<string> buffer = new List<string>();
        for (int i = 0; i < height; i++)
        {
            List<string> lineBuffer = new List<string>();
            searchString.ToCharArray().ToList().ForEach(e =>
            {
                lineBuffer.Add(dict[e].AsciiMatrix[i]);
            });

            buffer.Add(string.Join("", lineBuffer));
        }
        foreach (string row in buffer)
        {
            Console.WriteLine($"{row}");
        }
        return buffer;
    }
}

Keep in mind that this is, in my opinion, straight-off-the-top-of-my-head, throw-away code; nothing fancy, nothing revolutionary.

Now, back to the story. After submitting your solution, you have the chance to see how other people solved it and vote. Naturally, as I said, I was curious how others solved it, and picking the most voted solution, I saw this:

class Solution
{
    static void Main()
    {
        int L = int.Parse(Console.ReadLine());
        int H = int.Parse(Console.ReadLine());
        string Sentence = Console.ReadLine().ToUpper();
       
        for (int i = 0; i < H; i++) {
            string AsciiRow = Console.ReadLine();
            foreach (char letter in Sentence)
            {
                int index = char.IsLetter(letter) ? letter - 'A' : 26;
                Console.Write(AsciiRow.Substring(index*L,L));
            }
            Console.WriteLine();
        }
    }
}

Boom! Wait, what? I thought this was interesting. Let’s see how other people solved it… and amazingly, most of the solutions were along the same lines: people solving it in a few lines of code, some more elegant than others. But the theme was short and sweet.

At that moment, I started thinking, what is wrong with me? Since most of the solutions were in this different style, why did that not occur to me? 

The Epiphany

I have to admit that this got stuck in my head for a while. I was jealous. I wanted to do it like the cool kids too, and as you can imagine, I started doing a lot more challenges on multiple platforms. And yes, I was living the dream, writing one-liners like a boss. And then something strange happened: while working on real projects, I caught myself doing this, and then I started to understand what was going on.

You see, all these platforms are designed to measure specific implementations for specific problems (scored on, say, efficiency/speed). In theory, there is nothing wrong with that if we look at it from a purely technical perspective, strictly from a coding standpoint. But with experience, you start to realize that some other “non-functionals” are quite significant, sometimes even more important than speed, wit, and technical prowess. Usually, software is built by teams, and if team members have wildly varying approaches to the same problems, you might not find the success you are hoping for.

We need more chefs and fewer recipe book users.

Dave Snowden – DDD Conf 2018

Let’s unpack this. Looking at the two code snippets above that practically do the same thing, what difference do you see? 

I have to admit that it is a weird question, so let me rephrase that:

When landing on a new project, say you need to update the rendering direction from left to right to top-down per column.

Which code would you like to find staring back at you?

So now, maybe you think that I am being mean, because this is just a challenge, throw-away code, and no one does this on real systems, right?

Well, as I already mentioned, I tended to do this. And probably more people do it; it’s mostly how our brain works regarding problem-solving and patterns of thinking.

Or you could say that, according to TDD, the second approach is the right one and the first is premature optimization, and you might be right. Still, we set the context at the beginning: business-oriented software, so in my mind, the software should also encapsulate some business knowledge.

The problem 

Doing code challenges might not be a big problem by itself, but we can see the lurking danger if we start looking at it holistically. The reason behind this thought is, I think, an optimization problem: if we begin measuring these skills, people will start optimizing for them to the detriment of others. Although knowledge of algorithms is essential, it is not so critical that it should spawn a quite lucrative sub-industry of people giving training and mentorship for passing these tests. In the end, we might end up with people excellent at solving FizzBuzz questions but lacking the experience and knowledge needed to actually build software.

To put it bluntly, when there is a system, people will game the system.

Also, I have seen people argue that these tests show how candidates think. Still, for an optimization problem you see for the first time, with 30 minutes to solve it, it only shows whether you have solved something similar in the past. Most of these tests have a specific implementation for which the tests were written anyway. You either get it or not, and you might write off someone with high potential just because they did not encounter that situation before or, for some reason, couldn’t fully concentrate in that rather short time window.

Another argument is that this is “best practice”. Well, what exactly does best practice mean? The way I see it, for something to be considered best practice, it should be something that most people do, like a commonly agreed standard. But on the other hand, if everybody does something, does that mean it is good for you?

Discovering what is important

I think this is one of the biggest conundrums we have in the industry right now. What is important in software development? The only right/correct answer would be:

It depends…

Consultant

I can’t tell you what might be important to you, but I can tell you what I consider important when looking for new colleagues.

When building teams I look for the following things:

  • a well-rounded team with people who complement each other; let’s say for a team of 4 – 6 developers, if 3 are super technically oriented, I would try to find at least 2 who are more process oriented, so we avoid building “echo chambers”
  • team players rather than “rock stars / lone wolves”, people who usually respond to questions with “we” instead of “I”
  • people who are open to learning and adapting
  • people with the right know-how

Now that we have this out of the way, let’s focus a bit on this “it depends”. 

It is all about what scenario you are in and what exactly you are looking for. 

There are different needs, and in some cases it makes sense to have this as part of your process, say when having to staff 40 developers to transform specs into code, factory style. On the other hand, if you are looking for experienced people who also need to do software design, coach people, and negotiate technical solutions with other teams or requirements with stakeholders, this approach might not be the most fruitful.

As a takeaway, I think you should be very careful with global, one-size-fits-all processes; try to discover your actual needs and create custom processes for specific roles.


Complex? Complicated? Who knows … Who cares ?

Well, actually, you should. The difference between a complex and a complicated problem/system makes all the difference, starting with the approach, the way you measure success, and of course the probability of reaching a good outcome.

In the last article I mentioned “complex systems”; what is the deal with them? Well, before we go there, we first need to go through what each of these terms actually means.

So, systems. We hear this word many times a day, in terms like the “subway system”, the “train system”, or better yet the “software system”. So what is a system?

A regularly interacting or interdependent group of items forming a unified whole.

Merriam-Webster Dictionary

So, in this case, I believe that intuitively this is quite easy to grasp, and frankly, I don’t really think that it requires much more explanation. So let’s move on.

Complex or Complicated?

Good question. And as you can imagine, there is no easy answer here, I mean there are books written on this subject, but I will do my best to point out the main difference between the two, of course in the context of systems.

First the definitions:

 Complicated – consisting of parts intricately combined, difficult to analyze, understand, or explain 

Merriam-Webster Dictionary

Complex –  a whole made up of complicated or interrelated parts 

Merriam-Webster Dictionary

Okay, so to illustrate the difference, we will go through a few examples.

Take a piece of machinery, let’s say your laptop, PC, or the smartphone that you are reading this on: is it complex or complicated? 

Well, it is complicated. 

The traffic on our commute, is it complex or complicated? 

Complex.

Why ? Well, the main reason is “causality”. In a complicated system you can establish causality. You, or better said a subject matter expert, can properly identify each component and the effect it has on the whole system. So, in the case of a smart device, we can assume that if we need it to run faster we can add some more RAM. On the other hand, for a complex system, while we can have an idea of how it works, we cannot really know with certainty how the system will react. For example, for the commute traffic, there are a myriad of things that can influence your time to work: a garbage truck, an accident, weather conditions. But more than anything, and this is more or less why traffic is a real problem and particularly hard to optimize ( hence the joy of commuting), people interact in it, and people are highly adaptable, so one cannot know for sure how the traffic will react to, let’s say, a new roundabout or a new stoplight. 

As an exercise, what kind of a system would a middle school kids party be? 

Great, so what? 

Well, as interesting and mind-stimulating as this kind of exercise might be, one good question would be “to what end?” or, as some of our younger acquaintances would argue, “why bother ?”

Well, in the field of software engineering it turns out to be quite important to know the level of complexity ( or rather the type of system) of your problem space. It makes a world of difference in your approach to software development practices, patterns, architecture … well, actually everything is impacted by the type of problem you are trying to solve, and if you pick a bad approach, the consequences are quite dire. 

Remember from the last article that most software projects fail ? Well, most of the blunders are correlated with a wrong approach and a miscategorization of the problem domain.

Conclusion

So, hopefully, now that we have cleared up a bit what the differences between complex and complicated are, we can start looking into meatier (is this a word?) discussions. In the next post I will introduce you to the Cynefin framework, a very interesting concept: a “sense-making” framework developed by Prof. Dave Snowden (nope, it’s a different Snowden) that helps us navigate each of these states. 


4 ways of using Feature Flags like you actually know what you’re doing!

In the last article you saw how to add feature flags to your Azure Functions application. There was just one simple dummy example of how to use feature flags, so in this article we are going to look at a few ways in which feature flags might bring you value.

So, without spending too much precious time, let us dig in.

Avoiding the “upsies” ( aka. continuous delivery, while sleeping well at night)

The way this works is quite simple: each time you start developing a new feature, you guard it with a feature flag. This ensures that, if your code compiles, and the feature is spread across multiple stories, you can push everything to your main branch without the risk of breaking everything. Well, in theory at least. After you have finished developing all the stories, and everything is pushed, you can just flip a switch, and boom!, the feature becomes active, no fireman or cowboy hat required whatsoever.
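To make the idea concrete, here is a minimal sketch of a half-finished feature living on the main branch behind a flag. Everything here is hypothetical for illustration: the `IFeatureFlags` interface, the in-memory implementation, the `CheckoutService` class, and the `"new-checkout"` flag name are mine, not from any real library.

```csharp
using System.Collections.Generic;

public interface IFeatureFlags
{
    bool IsEnabled(string flag);
}

// A trivial in-memory implementation, standing in for a real flag provider.
public class InMemoryFeatureFlags : IFeatureFlags
{
    private readonly HashSet<string> _enabled;

    public InMemoryFeatureFlags(params string[] enabled)
    {
        _enabled = new HashSet<string>(enabled);
    }

    public bool IsEnabled(string flag) => _enabled.Contains(flag);
}

public class CheckoutService
{
    private readonly IFeatureFlags _flags;

    public CheckoutService(IFeatureFlags flags)
    {
        _flags = flags;
    }

    public string Checkout()
    {
        // The half-finished feature can be merged to main, but stays
        // dormant until someone flips the "new-checkout" flag.
        if (_flags.IsEnabled("new-checkout"))
        {
            return "new checkout flow";
        }

        return "old checkout flow";
    }
}
```

The point is that the old code path stays the default, so merging incomplete work is safe until the flag is flipped.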

Let someone else do it (aka. planned releases)

Wouldn’t it be great if you could do the deployment for the upcoming big “release” a few days before the actual event and not have to be synchronized with it (well, this implies that development is done a few days ahead, and all the testing is also completed, so this is applicable only where possible) ? That would basically mean that you do not have a very tight schedule for your release; you can do it in your own way and time. But how would you actually achieve this ?

Well, let’s say you have this shiny new feature that everybody is waiting for, and the product manager announced that it will be rolled out on Tuesday afternoon. The regular way to go about it is that immediately after lunch you start the deployment process; if you are using blue / green or slots, you push everything there, and when you get the signal, you swap the slots. If you don’t ( you should start doing it, really … please use slots!) then this is even more tricky to pull off: you need to know in advance how much time the deployment actually takes, and start precisely so that you finish at that specific time, while people anxiously click refresh on the page. Not fun, not fun at all.

The cool way of doing this: when the story is done and tested, you push everything to master, with the feature flag on the prod environment set to off / false. Boom, you’re finished. You don’t need to be involved any further. Someone can then just go into the feature flag app and turn it on when the time comes. Awww yeeaah!!

ProTip: Some of the more advanced feature flag tools also have something like planned activation 🙂

Word of caution: You should NEVER use feature flags when you are doing non-backward-compatible database changes. Now that I think about it, you shouldn’t do non-backward-compatible database changes anyway, but there it is, you have been warned!

Achieving the singularity (aka. synchronize rollout in a distributed application)

One of the “few” challenges that you face when developing a distributed application is that it is quite difficult to synchronize the rollout of new functionality that crosses multiple boundaries. I know, in your ideal project nothing gets out of bounds, but some of us are not that lucky, and here is how we might be able to achieve this more enlightened state of synchronicity. Like in the above examples, use feature flags to wrap all the functionality. Make all of it depend on the same feature flag. Again, deploy everything in a neat and timely fashion, and let someone else toggle the feature flag.

Step by step ( aka. Hierarchical feature flags)

Although most of the feature flag management apps do not have a way to create hierarchical flags, there is nothing to stop us from doing this ourselves. One simple way would be to create a naming convention, something like big-feature, big-feature.functionality1, big-feature.functionality2. This can help you gradually roll out functionality, and also have a much faster feedback loop in the dev – test/QA cycle. Meaning that your testing people are able to validate as you move from one story to the next, and do not have to do a “BIG BANG RELEASE” style of testing with 2 hours left before the deadline.
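A sketch of how the naming convention could be evaluated on top of any flat flag store (the `HierarchicalFlags` helper and its method are hypothetical, just to illustrate the idea): a child flag such as big-feature.functionality1 only counts as enabled when every ancestor flag is enabled as well.

```csharp
using System.Collections.Generic;

public static class HierarchicalFlags
{
    // Walk up the dotted hierarchy: "big-feature.functionality1" is on
    // only if "big-feature" AND "big-feature.functionality1" are both on.
    public static bool IsEnabled(string flag, ISet<string> enabledFlags)
    {
        var current = "";
        foreach (var part in flag.Split('.'))
        {
            current = current.Length == 0 ? part : current + "." + part;
            if (!enabledFlags.Contains(current))
            {
                return false;
            }
        }

        return true;
    }
}
```

This gives you a master kill switch for free: turning off big-feature disables every functionality under it, no matter what the child flags say.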

Word of caution

Now, as cool as they might seem, feature flags are not a silver bullet and will not fix all the issues of the world, but they do provide some nice possibilities if used properly and responsibly. Also, I’m quite sure that for you to be able to successfully use feature flags, you will need to change a bit the way you write your code, in a less procedural, more flow-based way, and in some cases you might have some duplication between the feature flag flows.

Duplicate code

That is okay, as long as you are using it responsibly and, after the validation of the new features, you go and remove all the guarded feature flags and only leave the final version. If you fail to do this, then … well, the “broken windows” principle kicks in, and you end up with a big ball of mud.

If you liked the content, consider joining the list to get notified 😉


Using feature flags in Azure Function and Durable Functions with Azure App Configuration

Well, well, well, you are looking to up your game! Then let’s do this. Let’s discuss feature flags, but first what are feature flags ?

Well, feature flags are a technique which enables you to influence how your application works without actually touching the code, but don’t take my word for it, check what Martin Fowler has to say about them.

We have already looked at this in some way in Inject configuration values in Azure Functions. In that article you can see how to load environment variables in your application, and based on those you can influence how your application behaves. But that has a downside: every time you change the app config in your Azure Function, the application gets restarted. Not ideal… and if this wasn’t enough, it is restricted to the current application, so, as you might imagine, you cannot read the contents of the app config of a different application. Bummer…

Why bummer ? Well, aren’t we living in the age of microservices ? Don’t we want to have a gazillion independently deployable, self-healing, reactive, owners-of-their-own-data, breaker-of-chains nano-services ?

Well, in this day and age, when we are doing web-scale, most of the time some feature will span multiple microservices and UIs, so using the local app config in each of them would be quite tiresome and quite error prone. We need a better solution. Luckily, there are quite a number of solutions out there, one of which is also in Azure ( how convenient ), and we will take a quick look at it. The name is *drum roll* Azure App Configuration !!!111

Yeah, shocking … well, this is what we will be looking at today. Well, you at least. So, in order to get started, please visit the official quick start guide, and after you have finished going through it, I’ll eagerly wait for you back here to go through a quick and nice example. Also, at the time of writing, the documentation was a bit out of date, and I had to do some things here and there to fix it, things that will be shown below.

The Code

Well, now that you went through the quickstart ( hopefully ), you should have your own app configuration endpoint, as well as the connection string, in your environment variables.

Now, a few more things that we need to add: as you might notice, the quick start uses the annoyingly static type of functions; we will use the more class-based, DI / IoC way of doing things.

Also, as usual, you can find the code on github.

Okay, in order to get the ball rolling, we initially need to set up the Startup file as such:

using System;
using FunWithFlags;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;

[assembly: FunctionsStartup(typeof(Startup))]
namespace FunWithFlags
{
    public class Startup : FunctionsStartup
    {
        private IConfiguration configuration;

        public override void Configure(IFunctionsHostBuilder builder)
        {
            this.configuration = new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .AddAzureAppConfiguration(options =>
                {
                    options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
                        .UseFeatureFlags();
                }).Build();

            ServiceRegistrations.ConfigureServices(builder.Services, this.configuration);
        }
    }
}

Here we set up the usual start-up part; then we need to set up the ServiceRegistrations class:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.FeatureManagement;

public static class ServiceRegistrations
{
    public static void ConfigureServices(IServiceCollection builderServices, IConfiguration configuration)
    {
        builderServices.AddFeatureFlags(configuration);
    }

    private static IServiceCollection AddFeatureFlags(this IServiceCollection serviceCollection, IConfiguration configuration)
    {
        serviceCollection.AddSingleton<IConfiguration>(configuration).AddFeatureManagement();
        serviceCollection.AddAzureAppConfiguration();
        return serviceCollection;
    }
}

So, the essential parts are the AddFeatureManagement and AddAzureAppConfiguration registrations; nothing new, just adapted to the DI-style functions.

After setting up the usual DI container, we have the following function code:

    public class ShowShipFlag
    {
        private readonly IFeatureManager _featureManager;
        private readonly IConfigurationRefresher _refresher;
        public ShowShipFlag(IFeatureManager featureManager, IConfigurationRefresherProvider refresherProvider)
        {
            _featureManager = featureManager;
            _refresher = refresherProvider.Refreshers.First();
        }
        [FunctionName("ShowShipFlag")]
        public async Task<IActionResult> RunAsync(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
            HttpRequest req, ILogger log)
        {
            await _refresher.TryRefreshAsync();
            var shipFlag = "The India Company";
            var usePirateShip = await this._featureManager.IsEnabledAsync("pirate-flag");
            if (usePirateShip)
            {
                shipFlag = "Pirate";
            }
            return (ActionResult) new OkObjectResult($"The ship has a {shipFlag} flag ");
        }
    }

Now, looking back at this and at what you achieved in the quick start, you might have noticed something new here: the “refresher” bit. This is what actually makes the magic happen and forces the feature flags to update their values; otherwise, you would only use the initial value, which gets cached for the lifetime of the app. What you need to keep in mind is that, although you are attempting to refresh, this is by default done within a 30-second caching window, so once you flip the switch, you might have to wait a bit until you get the desired result. This can, however, be changed to a different value in the Startup file.
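As a sketch of what that Startup change could look like: CacheExpirationInterval is the feature-flag option exposed by the Azure App Configuration provider at the time of writing, but do double-check the name against your package version before relying on it.

```csharp
// Sketch only: shortening the feature-flag cache window in Startup.Configure.
this.configuration = new ConfigurationBuilder()
    .AddEnvironmentVariables()
    .AddAzureAppConfiguration(options =>
    {
        options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
            .UseFeatureFlags(flagOptions =>
            {
                // Default is 30 seconds; lowering it gives snappier toggles
                // at the cost of more calls to App Configuration.
                flagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(5);
            });
    }).Build();
```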

As you have seen, this can be used inside an activity, and it can also be used inside an orchestrator:

    public class ShipDefenseOrchestrator
    {
        private readonly IFeatureManager _featureManager;
        private readonly IConfigurationRefresher _refresher;
        public ShipDefenseOrchestrator(IFeatureManager featureManager, IConfigurationRefresherProvider refresherProvider)
        {
            _featureManager = featureManager;
            _refresher = refresherProvider.Refreshers.First();
        }
        [FunctionName(nameof(ShipDefenseOrchestrator))]
        public  async Task<List<string>> Run(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            var actions = new List<string>();
            await this._refresher.TryRefreshAsync();
            var isParanoid = await this._featureManager.IsEnabledAsync("IsParanoid");
            var flag = await context.CallActivityAsync<string>(nameof(CheckOtherShipsFlag), new { });
            if (flag == "Pirate")
            {
                actions.Add(await context.CallActivityAsync<string>(nameof(PrepareDefensiveManeuvers), null));
                actions.Add(await context.CallActivityAsync<string>(nameof(FireCanons), null));
            }
            if (isParanoid)
            {
                actions.Add(await context.CallActivityAsync<string>(nameof(PrepareDefensiveManeuvers), null));
            }
            return actions;
        }
        [FunctionName(nameof(CheckOtherShipsFlag))]
        public async Task<string> CheckOtherShipsFlag([ActivityTrigger] object obj)
        {
            await _refresher.TryRefreshAsync();
            var shipFlag = "The West India Company";
            var usePirateShip = await this._featureManager.IsEnabledAsync("pirate-flag");
            if (usePirateShip)
            {
                shipFlag = "Pirate";
            }
            return shipFlag;
        }
        [FunctionName(nameof(PrepareDefensiveManeuvers))]
        public async Task<string> PrepareDefensiveManeuvers([ActivityTrigger] object obj)
        {
            return "To battle stations!";
        }
        [FunctionName(nameof(FireCanons))]
        public async Task<string> FireCanons([ActivityTrigger] object obj)
        {
            return "BOOM!";
        }
        [FunctionName(nameof(IsParanoid))]
        public async Task<bool> IsParanoid([ActivityTrigger] object unit)
        {
            await this._refresher.TryRefreshAsync();
            return await this._featureManager.IsEnabledAsync("IsParanoid");
        }
        [FunctionName("ShipDefenseOrchestrator_HttpStart")]
        public async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]
            HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter,
            ILogger log)
        {
            // Function input comes from the request content.
            string instanceId = await starter.StartNewAsync(nameof(ShipDefenseOrchestrator), null);
            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }

Quite straightforward. Now, I know this is quite a simple example, but it is a powerful one. Just imagine you have multiple function apps and other apps working together, and a new functionality deployed that can be enabled or disabled at the flip of a … well, checkbox. Sounds cool, right ?

Also, even better, you could combine these feature flags with the technique described in Dynamically select orchestrators in Azure Durable Functions, and you could get quite spectacular results.

Also, in case you missed the link to github: https://github.com/laur3d/FunWithFlags

Now, if you need a few more examples of how you could use feature flags, check this article: 4 ways of using Feature Flags like you actually know what you’re doing!

If you liked the content, consider joining the list to get notified 😉


Writing unit tests for orchestration functions in Azure Durable Functions

In this article we will look into writing unit tests for Orchestrator functions using xUnit and Moq. We will write tests for a mildly complex orchestrator with branching logic.

So, unit testing…

Before we start, I have to admit, I for one am not a very big fan of unit testing. I mean, I always strive for 100% coverage in core domain tests, but I seldom test any infrastructure code, and if we are talking about UIs, I basically only write some minimal sanity checks.

The reason for this approach is that over the years I came to the conclusion that not all tests are equal: some provide way more value, and some just create noise and are in constant need of updating. For example, updating a major UI framework version will usually break all UI tests. Anyway, I always had the luck of working with QA engineers who were doing Selenium tests, so no biggy :).

Nevertheless, one thing to keep in mind is that TESTING IS IMPORTANT!

Testing Sagas / Orchestrators

Looking at the process of testing orchestrators, at first glance it looks like one of the most useless things that you could be testing.

At least this is what I initially thought when I started my serverless journey, but then I came to realize that the unit tests of the orchestrators could provide an amazing amount of value.

But let’s not get ahead of ourselves, and let’s start the testing.

So, we will be starting with the following orchestrator logic :

We have an orchestrator that will do the following things:

  1. Check if we deliver to the specified address
  2. Get the proper suborchestrator to calculate the shipping cost based on the partners in the region
  3. Invoke the proper suborchestrator to get the right price
  4. Send an event to a bus for marketing / sales

I know, it is a bit far-fetched, but this rather contrived example has what we need to demo the unit tests. To be more exact, we have branching logic, we have a suborchestrator, and we have an activity that does not return anything ( believe me, there will be a problem with this one, but you will see how to handle it).

The code

The orchestrator that we will be writing the unit tests looks like this:

 public static class SagaToTestOrchestrator
    {
        [FunctionName("SagaToTestOrchestrator")]
        public static async Task<ShippingPrice> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            var input = context.GetInput<SagaContext>();

            // activity to check if we ship to the specified continent
            if (!await context.CallActivityAsync<bool>("IsContinentSupported", input.Continent))
            {
                return new ShippingPrice()
                {
                    Shippable = false,
                    Message = "We aren't able to ship to your location"
                };
            }

            // activity to get proper orchestrator for continent for shipping partner
            var supplierOrchestratorToRun = await context.CallActivityAsync<string>("GetSupplierOrchestratorForContinent", input.Continent);

            // orchestrator to get the price for the shipping address
            var priceForShipment =
                await context.CallSubOrchestratorAsync<decimal>($"{supplierOrchestratorToRun}Orchestrator", input);


            // activity to publish event for Sales / marketing
            await context.CallActivityAsync("PublishCalculatedPriceActivity", (input, priceForShipment));

            return new ShippingPrice()
            {
                Shippable = true,
                Price = priceForShipment
            };
        }

        [FunctionName("CourierAOrchestrator")]
        public static async Task<decimal> CourierAOrchestrator([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            return 100;
        }

        [FunctionName("CourierBOrchestrator")]
        public static async Task<decimal> CourierBOrchestrator([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            return 120;
        }

        [FunctionName("IsContinentSupported")]
        public static async Task<bool> IsContinentSupported([ActivityTrigger] string continent, ILogger log)
        {
            var supportedContinents = new List<string>
            {
                "North America", "South America", "Europe",
            };

            return supportedContinents.Contains(continent);
        }

        [FunctionName("GetSupplierOrchestratorForContinent")]
        public static async Task<string> GetSupplierOrchestratorForContinent([ActivityTrigger] string continent, ILogger log)
        {
            var courier = "";
            switch (continent)
            {
                case "South America":
                case "North America":
                    courier = "CourierA";
                    break;
                case "Europe":
                    courier = "CourierB";
                    break;
            }

            return courier;
        }

        [FunctionName("PublishCalculatedPriceActivity")]
        public static async Task PublishCalculatedPriceActivity([ActivityTrigger] (SagaContext context, decimal price) input, ILogger log)
        {
            log.LogInformation($"{input.context.Continent}: {input.price}");
        }

        [FunctionName("SagaToTestOrchestrator_HttpStart")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post")]
            HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter,
            ILogger log)
        {
            // Function input comes from the request content.

            string instanceId = await starter.StartNewAsync("SagaToTestOrchestrator", null);

            log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }

    public class ShippingPrice
    {
        public bool Shippable { get; set; }
        public decimal Price { get; set; }
        public string Message { get; set; }
    }

    public class SagaContext
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string Country { get; set; }
        public string Continent { get; set; }
    }
}

So, in order to keep things as simple as possible, I went with the static approach and kept everything related to this orchestrator in one file. If you are interested in how to set up a more structured Durable Functions project, you can read this article.

Now, this is not what you came here for, so on to the testing part.

Let the testing begin!

So, unit testing, like most things in software development, can be done in a myriad of ways, most of which will amount to similar results. Here we will first show the way described in the official MS Docs; afterwards I’ll show how I usually like to test orchestrators.

The way of the docs

Although, technically, the way described in the docs and the way we will end up with are quite similar, the idea behind them is quite different.

So, after reading the documentation, this is the first test function:

        [Fact]
        public async Task CalculatePriceForAmerica()
        {
            // Arrange / Given
            var orchContext = new SagaContext
            {
                Continent = "North America"
            };
            var context = new Mock<IDurableOrchestrationContext>();

            // mock the get input
            context.Setup(m =>
                m.GetInput<SagaContext>()).Returns(orchContext);

            //set-up mocks for activities
            context.Setup(m =>
                    m.CallActivityAsync<bool>("IsContinentSupported", It.IsAny<object>()))
                .ReturnsAsync(true);

            // set-up mocks for activity
            context.Setup(m
                    => m.CallActivityAsync<string>("GetSupplierOrchestratorForContinent", It.IsAny<object>()))
                .ReturnsAsync("CourierA");

            // set-up mocks for suborchstrators
            context.Setup(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierAOrchestrator", It.IsAny<string>(), It.IsAny<object>()))
                .ReturnsAsync(100);

            // ACT / When
            var price = await SagaToTestOrchestrator.RunOrchestrator(context.Object);

            // Assert / Then
            Assert.True(price.Shippable);
            Assert.Equal(100, price.Price);

        }

Now, let’s go through the code for a bit. As you can notice we have the three usual sections specific to AAA or GWT methodologies.

As you can see above, the biggest part of the test is the Arrange part. We need to mock all the activities that will be called in the test. Also, we need to mock the GetInput function of the orchestrator and also the suborchestrators.

After, in the “Act” section, we keep it quite simple, we just invoke the orchestrator and get the result.

In the assert part we then check that the value returned from the orchestrator has the proper values / state.

Now, this code works and the test passes, but I think there are parts that we cannot test. Using this approach, we could never test functions that, for example, send events, since they do not influence the result in any way.

Also, there might be cases where you have orchestrators that don’t produce any concrete result, such as ones that watch the service bus, react in some way, and never return anything.

The “Flow Testing” Way

In order to fix this, I use something that I like to refer to as “Flow Testing”. Basically, this way we don’t really want to assert the end result, since in reality we mock all the inputs and functions, so the chances are quite high that the result is good. Instead, we will focus on testing that the proper elements in the “flow” were called the right number of times. If “flow” is unclear, you can read more about it here.

So, let’s see some code, and then we will discuss a bit further.

        // V2: The Flow Way
        [Fact]
        public async Task CalculatePriceForEurope()
        {
            // Arrange / Given
            var orchContext = new SagaContext
            {
                Continent = "Europe"
            };
            var context = new Mock<IDurableOrchestrationContext>();

            // mock the get input
            context.Setup(m =>
                m.GetInput<SagaContext>()).Returns(orchContext);

            //set-up mocks for activities
            context.Setup(m =>
                    m.CallActivityAsync<bool>("IsContinentSupported", It.IsAny<object>()))
                .ReturnsAsync(true);

            // set-up mocks for activity
            context.Setup(m
                    => m.CallActivityAsync<string>("GetSupplierOrchestratorForContinent", It.IsAny<object>()))
                .ReturnsAsync("CourierB");


            // set-up mocks for suborchstrators
            context.Setup(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierAOrchestrator", It.IsAny<string>(), It.IsAny<object>()))
                .ReturnsAsync(100);

            context.Setup(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierBOrchestrator", It.IsAny<string>(), It.IsAny<object>()))
                .ReturnsAsync(120);

            // mock the publish activity
            // at the time of writing, there is no way of mocking CallActivityAsync so we need to use the generic version
            context.Setup(m =>
                m.CallActivityAsync<object>("PublishCalculatedPriceActivity", It.IsAny<object>())
            );


            // ACT / When
            var price = await SagaToTestOrchestrator.RunOrchestrator(context.Object);

            // Assert / Then

            context.Verify(
                m => m.CallActivityAsync<bool>(
                    "IsContinentSupported",
                    It.IsAny<object>()),
                Times.Once);

            context.Verify(
                    m => m.CallActivityAsync<string>(
                        "GetSupplierOrchestratorForContinent", It.IsAny<object>()),
                    Times.Once
                );

            context.Verify(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierAOrchestrator", It.IsAny<string>(), It.IsAny<object>()),
                Times.Never);

            context.Verify(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierBOrchestrator", It.IsAny<string>(), It.IsAny<object>()),
                Times.Once);

            context.Verify( m =>
                    m.CallActivityAsync<object>("PublishCalculatedPriceActivity", It.IsAny<object>()),
                Times.Once
            );

        }

So, as you can see, using this approach we do not validate the values that the mocks returned, which is a bit silly; instead we test that, given the proper values, the orchestrator flow behaves how we expect it to.

Of course, this isn’t an either-or problem, so the two ways could easily be combined. In my case though, my main orchestrator rarely returns anything: it’s there to glue several systems together, and most of the time it runs off a Service Bus trigger, so the flow-based way is better suited.
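For reference, combining the two approaches could look like the sketch below: keep the Verify calls from the flow test and additionally assert on the returned value. This assumes the orchestrator actually returns the calculated price (which, as mentioned, mine rarely does):

```csharp
// Hypothetical sketch: combine flow verification with a value assertion,
// assuming RunOrchestrator returns the calculated price as a decimal.
var price = await SagaToTestOrchestrator.RunOrchestrator(context.Object);

// value check: the mocked CourierB price should be the one returned
Assert.Equal(120m, price);

// flow check: CourierB was called, CourierA was not
context.Verify(m =>
        m.CallSubOrchestratorAsync<decimal>("CourierBOrchestrator", It.IsAny<string>(), It.IsAny<object>()),
    Times.Once);
context.Verify(m =>
        m.CallSubOrchestratorAsync<decimal>("CourierAOrchestrator", It.IsAny<string>(), It.IsAny<object>()),
    Times.Never);
```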

Bonus: the parameterized flow way

Well, this last part is something that people usually have mixed feelings about: we will be using the [Theory] and [MemberData] attributes from xUnit to parameterize our unit tests.

This works well here because these kinds of tests are rather repetitive and contain a lot of boilerplate.

Here is how the code looks:

        // V3 : Parameterized Flow
        [Theory]
        [MemberData(nameof(DataSourceForTest))]
        public async Task TestUsingTheory(OrchestratorTestParams pTestParams)
        {
            // Arrange / Given
            var orchContext = new SagaContext
            {
                Continent = pTestParams.Continent
            };
            var context = new Mock<IDurableOrchestrationContext>();

            // mock the get input
            context.Setup(m =>
                m.GetInput<SagaContext>()).Returns(orchContext);

            //set-up mocks for activities
            context.Setup(m =>
                    m.CallActivityAsync<bool>("IsContinentSupported", It.IsAny<object>()))
                .ReturnsAsync(pTestParams.IsContinentSupported);

            // set-up mocks for activity
            context.Setup(m
                    => m.CallActivityAsync<string>("GetSupplierOrchestratorForContinent", It.IsAny<object>()))
                .ReturnsAsync(pTestParams.SupplierToBeReturnedFromContinentOrchestrator);


            // set-up mocks for sub-orchestrators
            context.Setup(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierAOrchestrator", It.IsAny<string>(), It.IsAny<object>()))
                .ReturnsAsync(pTestParams.ValueForCourierA);

            context.Setup(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierBOrchestrator", It.IsAny<string>(), It.IsAny<object>()))
                .ReturnsAsync(pTestParams.ValueForCourierB);

            // mock the publish activity
            // at the time of writing, there is no way of mocking CallActivityAsync so we need to use the generic version
            context.Setup(m =>
                m.CallActivityAsync<object>("PublishCalculatedPriceActivity", It.IsAny<object>())
            );


            // ACT / When
            var price = await SagaToTestOrchestrator.RunOrchestrator(context.Object);

            // Assert / Then

            context.Verify(
                m => m.CallActivityAsync<bool>(
                    "IsContinentSupported",
                    It.IsAny<object>()),
                pTestParams.IsContinentSupportedCalledTimes);

            context.Verify(
                    m => m.CallActivityAsync<string>(
                        "GetSupplierOrchestratorForContinent", It.IsAny<object>()),
                    pTestParams.GetSupplierOrchestratorForContinentCalledTimes
                );

            context.Verify(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierAOrchestrator", It.IsAny<string>(), It.IsAny<object>()),
                pTestParams.CourierAOrchestratorCalledTimes);

            context.Verify(m =>
                    m.CallSubOrchestratorAsync<decimal>("CourierBOrchestrator", It.IsAny<string>(), It.IsAny<object>()),
                pTestParams.CourierBOrchestratorCalledTimes);

            context.Verify( m =>
                    m.CallActivityAsync<object>("PublishCalculatedPriceActivity", It.IsAny<object>()),
                pTestParams.PublishCalculatedPriceActivityCalledTimes
            );
        }

As you can see, this is almost identical to the “flow” version, the only difference being that the parameters are provided from outside.

In order to avoid passing a gazillion parameters, I created a simple container class that holds all of them as properties. It looks like this:

        public class OrchestratorTestParams
        {
            public string Continent { get; set; }
            public bool IsContinentSupported { get; set; }
            public string SupplierToBeReturnedFromContinentOrchestrator { get; set; }
            public decimal ValueForCourierA { get; set; }
            public decimal ValueForCourierB { get; set; }
            public Times IsContinentSupportedCalledTimes { get; set; }
            public Times GetSupplierOrchestratorForContinentCalledTimes { get; set; }
            public Times CourierAOrchestratorCalledTimes { get; set; }
            public Times CourierBOrchestratorCalledTimes { get; set; }
            public Times PublishCalculatedPriceActivityCalledTimes { get; set; }

        }

Now, if you are not familiar with xUnit’s Theory, it tells xUnit to run the test method once for each element in the provided collection. Here is a nice article that I found which explains this in much more detail. In our case, we used the member data way of passing the parameters. Speaking of which, here is how it looks:

        public static IEnumerable<object[]> DataSourceForTest =>
            new List<object[]>
            {
                new object[]
                {
                    new OrchestratorTestParams
                    {
                        Continent = "Europe",
                        IsContinentSupported = true,
                        SupplierToBeReturnedFromContinentOrchestrator = "CourierB",
                        ValueForCourierA = 100,
                        ValueForCourierB = 120,
                        IsContinentSupportedCalledTimes = Times.Once(),
                        GetSupplierOrchestratorForContinentCalledTimes = Times.Once(),
                        CourierAOrchestratorCalledTimes = Times.Never(),
                        CourierBOrchestratorCalledTimes = Times.Once(),
                        PublishCalculatedPriceActivityCalledTimes = Times.Once()
                    }
                },
                new object[] {
                    new OrchestratorTestParams
                    {
                        Continent = "North America",
                        IsContinentSupported = true,
                        SupplierToBeReturnedFromContinentOrchestrator = "CourierA",
                        ValueForCourierA = 100,
                        ValueForCourierB = 120,
                        IsContinentSupportedCalledTimes = Times.Once(),
                        GetSupplierOrchestratorForContinentCalledTimes = Times.Once(),
                        CourierAOrchestratorCalledTimes = Times.Once(),
                        CourierBOrchestratorCalledTimes = Times.Never(),
                        PublishCalculatedPriceActivityCalledTimes = Times.Once()
                    }
                },
                new object[] {
                    new OrchestratorTestParams
                    {
                        Continent = "Antarctica",
                        IsContinentSupported = false,
                        SupplierToBeReturnedFromContinentOrchestrator = "CourierA",
                        ValueForCourierA = 100,
                        ValueForCourierB = 120,
                        IsContinentSupportedCalledTimes = Times.Once(),
                        GetSupplierOrchestratorForContinentCalledTimes = Times.Never(),
                        CourierAOrchestratorCalledTimes = Times.Never(),
                        CourierBOrchestratorCalledTimes = Times.Never(),
                        PublishCalculatedPriceActivityCalledTimes = Times.Never()
                    }
                }
            };

Quite nice I might say…

Looking at this, we have three scenarios running in this theory: the first and second are the exact same scenarios that we tested earlier, and the third tests how the orchestrator behaves if we pass an unsupported continent.

Now, as stated earlier, this is not for everybody: some people like to have each scenario in a separate test, while others appreciate the way this works. I personally use it only on very rare occasions, mostly for very repetitive tests, but I thought it might be of interest to you.
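As a side note, when the parameters are simple primitives, xUnit also offers the [InlineData] attribute, which keeps the data right next to the test. A minimal sketch (the method name and the reduced parameter set here are hypothetical, just to show the shape):

```csharp
// Hypothetical sketch: [InlineData] only accepts compile-time constants,
// so it works for primitive parameters but not for complex objects such
// as OrchestratorTestParams (which is why [MemberData] is used above).
[Theory]
[InlineData("Europe", true)]
[InlineData("Antarctica", false)]
public async Task TestWithInlineData(string continent, bool isContinentSupported)
{
    var context = new Mock<IDurableOrchestrationContext>();

    context.Setup(m => m.GetInput<SagaContext>())
        .Returns(new SagaContext { Continent = continent });

    context.Setup(m =>
            m.CallActivityAsync<bool>("IsContinentSupported", It.IsAny<object>()))
        .ReturnsAsync(isContinentSupported);

    // ... the rest of the set-up and Verify calls would follow,
    // exactly as in the parameterized flow version above.

    await SagaToTestOrchestrator.RunOrchestrator(context.Object);
}
```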

The End

Well, hopefully you made it this far. Thank you for taking the time to read this. As usual, all the code is also available on GitHub.

In the next article we will go through some ways to get even more value out of the orchestration tests, and also make them less verbose.

If you liked the content, I’d suggest you join the mailing list to get notified when the next article will be published.
