Simple ModelState Extension

Something like this might be out there already, but I recently found myself needing not only to validate a model that has been POSTed or PUT to an API, but also to easily roll up those validation errors and ship them back.  I know there are lots of ways to accomplish this, but here was my approach.

First, start with the model and a validation rule.
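Something along these lines (a minimal sketch; the PersonDto name and the extra Age property are just for illustration):

    using System.ComponentModel.DataAnnotations;

    public class PersonDto
    {
        // The one rule for this DTO: Name must always be supplied
        [Required(ErrorMessage = "Name is required")]
        public string Name { get; set; }

        public int Age { get; set; }
    }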

Super simple, I know, but we’ve got a basic DTO with a requirement to always carry the Name property.

And we can have a controller that takes this model and validates it.
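For example (again a sketch; the controller and route names are mine):

    [Route("api/[controller]")]
    public class PeopleController : Controller
    {
        [HttpPost]
        public IActionResult Post([FromBody] PersonDto person)
        {
            // Model binding runs the validation attributes and records any
            // failures in ModelState
            if (!ModelState.IsValid)
            {
                return BadRequest();
            }

            return Ok();
        }
    }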

This is great.  We can do something interesting when we receive bad input.

But my requirement was to have my API return a list of all of the errors that came off the model.  So my approach was to extend the ModelStateDictionary.  This gave me the opportunity to iterate the errors, put them into a container and then return that container as part of the 400 response, so the client can do something interesting with the output.

So here is the extension method.
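Or at least a minimal reconstruction of it (the ValidationResponse container is defined just below):

    using System.Collections.Generic;
    using Microsoft.AspNetCore.Mvc.ModelBinding;

    public static class ModelStateExtensions
    {
        // Roll every error in the ModelStateDictionary up into one container
        public static ValidationResponse ToValidationResponse(this ModelStateDictionary modelState)
        {
            var errors = new List<string>();

            foreach (var value in modelState.Values)
            {
                foreach (var error in value.Errors)
                {
                    errors.Add(error.ErrorMessage);
                }
            }

            return new ValidationResponse { Errors = errors };
        }
    }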

Very straightforward.  What I’m doing here is looking at the ModelStateDictionary, looping over the values and looking for errors.  If errors exist for a specific value, I add the string error message onto a List<string>, and at the end I return a container object that holds the errors.  If I wanted, I could put more on the ValidationResponse class, but this is just an example, so I believe you could further extend the concept.

The response class looks like this.
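A simple container, sketched to match the extension method above:

    using System.Collections.Generic;

    public class ValidationResponse
    {
        public List<string> Errors { get; set; } = new List<string>();
    }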

Now your controller action looks like this.
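Here’s the earlier action reworked to use the extension (still a sketch):

    [HttpPost]
    public IActionResult Post([FromBody] PersonDto person)
    {
        if (!ModelState.IsValid)
        {
            // 400 plus the full list of validation errors for the caller
            return BadRequest(ModelState.ToValidationResponse());
        }

        return Ok();
    }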

You’ll have a response which is a 400 with an array of errors.

And you’re finished.  Hope this helps as you think about validating API input.


Processing text/plain in ASP.NET Web API

Binding a request body to an action parameter seems straightforward, right?  And it is.  The only missing piece is that your API doesn’t know what to do when the body comes across with "Content-Type": "text/plain".

Have no fear, ASP.NET Core has an abstract class named TextInputFormatter (itself derived from InputFormatter) with built-in derived classes for working with JSON and XML.  So the below snippets will get you a basic implementation for dealing with plain text, along with how to add it to your MVC pipeline.
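Something along these lines should do it (a sketch; the PlainTextInputFormatter name is mine):

    using System;
    using System.IO;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc.Formatters;

    public class PlainTextInputFormatter : TextInputFormatter
    {
        public PlainTextInputFormatter()
        {
            SupportedMediaTypes.Add("text/plain");
            SupportedEncodings.Add(Encoding.UTF8);
            SupportedEncodings.Add(Encoding.Unicode);
        }

        // Only bind text/plain bodies to string parameters
        protected override bool CanReadType(Type type) => type == typeof(string);

        public override async Task<InputFormatterResult> ReadRequestBodyAsync(
            InputFormatterContext context, Encoding encoding)
        {
            // Read the raw body and hand it back as the bound value
            using (var reader = new StreamReader(context.HttpContext.Request.Body, encoding))
            {
                var content = await reader.ReadToEndAsync();
                return await InputFormatterResult.SuccessAsync(content);
            }
        }
    }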

And to make sure this formatter is wired into your MVC pipeline, register it in Startup.cs (or wherever you register your dependencies).
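A sketch of the registration:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            // Make MVC aware of the new formatter for text/plain bodies
            options.InputFormatters.Add(new PlainTextInputFormatter());
        });
    }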

And then use it inside of your controller.
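Any string parameter bound with [FromBody] will now accept text/plain (a sketch):

    [HttpPost]
    public IActionResult Post([FromBody] string body)
    {
        // body holds the raw text/plain payload
        return Ok(body);
    }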

And that’s it.  You’ve successfully handled text/plain in your controller, and the formatter can be reused by any other controller action that needs text/plain as the content type.


AWS SNS HTTP POST to C# (dotnet core 2.1)

I spent way too much time on this today, so I wanted to document what I found, along with a simple workaround for what seems to be an oversight on AWS’ part (read some forums and you’ll see the threads).

When setting up an SNS Topic, for instance when you have two services (APIs) that you want to connect together, you first need to confirm the subscription so that the subscriber can receive messages. The part that took me forever to get (it is documented as such) is that when you’re looking at JSON messages all day, you don’t pay attention to the header.

Here is the link to the setup article

In Section 1, notice that there is a sample payload for what you might expect.  The tip is to pay attention to the Content-Type in the header.  Although the body of the POST is JSON, they send it as text/plain.

And when using C#, having JSON as a [FromBody] parameter is a no-brainer, with serialization into your object happening automatically. However, with text/plain, C# doesn’t know what to do with the input.  The below method is just one approach to dealing with this.  I’m going to wrap this up at some point soon and create a custom InputFormatter, which I’ll post once I get that done.  For now, the below code will get you going.  Best of luck, and I hope this saves you some time!
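A sketch of that approach (the controller and route names are mine; JObject comes from Newtonsoft.Json, which ships with ASP.NET Core 2.1):

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Newtonsoft.Json.Linq;

    [Route("api/[controller]")]
    public class SnsController : Controller
    {
        private static readonly HttpClient _httpClient = new HttpClient();

        [HttpPost]
        public async Task<IActionResult> Post()
        {
            // SNS sends its JSON body as text/plain, so skip model binding
            // and read the raw request body ourselves
            string body;
            using (var reader = new StreamReader(Request.Body))
            {
                body = await reader.ReadToEndAsync();
            }

            var message = JObject.Parse(body);

            // The first message on a new subscription is a SubscriptionConfirmation;
            // visiting its SubscribeURL confirms the endpoint
            if ((string)message["Type"] == "SubscriptionConfirmation")
            {
                await _httpClient.GetAsync((string)message["SubscribeURL"]);
                return Ok();
            }

            // ... handle Notification messages here
            return Ok();
        }
    }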


Validating User/Computer Supplied Input

So I picked up a new project that looks like it might run two or three months, will require a significant chunk of code to bump our current API along a version, and has had me thinking.  A pattern I’ve been using lately looks a little bit like the below.
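Reconstructed for illustration (router, validator and its checkBody function stand in for my real helpers):

    const validator = require('./lib/validator');

    router.post('/users', (req, res, next) => {
      // Hand-rolled checks live in an external file and run inside the route
      const errors = validator.checkBody(req.body);
      if (errors.length > 0) {
        return res.status(400).json({ errors });
      }
      next();
    });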

validator is a set of functions defined in an external file that I use to process various aspects of the request’s body; if everything checks out, the request can move along the pipeline.  This is obviously an improvement upon the alternative.
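Which is ad-hoc checks inlined into every handler, roughly like this (an illustrative reconstruction):

    router.post('/users', (req, res) => {
      if (!req.body.name || typeof req.body.name !== 'string') {
        return res.status(400).json({ error: 'name is required' });
      }
      if (!req.body.email) {
        return res.status(400).json({ error: 'email is required' });
      }
      // ...the actual handler logic sits underneath the checks
    });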

But with the upcoming chunk of work, I really wanted something a little more elegant, maintainable and configurable.  A quick flashback a couple of months: I remembered driving home listening to NodeUp, where the topic was a former PayPal dev talking about his experience with hapi.  That night I did some reading, and I remember liking their plugin architecture; one of those plugins was Joi.

Joi describes itself like this: "Object schema description language and validator for JavaScript objects."  On the surface, when I read this, I thought: that sounds interesting.

Now fast forward a couple of months and here I am, looking at an extensive project and not really liking my validation pattern.  Sure, it’ll work.  And honestly, there are only a couple of us coding, so it won’t kill us to have slightly more verbose code, but when I lay my head down at night, I’ll know it’s not quite what I want.  A few searches later and I found express-joi-validation.

What I like about this module is that you can use it as middleware, which means it fits into the flow rather than being applied after the flow.  In addition, you describe the validators as Joi schemas, which might be my favorite part.
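A quick sketch of what that looks like (using the module’s createValidator API; the schema and route are my own example):

    const Joi = require('joi');
    const validator = require('express-joi-validation').createValidator({});

    const userSchema = Joi.object({
      name: Joi.string().required(),
      email: Joi.string().email().required()
    });

    // The validation runs as middleware, so bad input never reaches the handler
    app.post('/users', validator.body(userSchema), (req, res) => {
      res.status(201).json({ created: req.body.name });
    });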

I don’t have a ton of examples yet, but just wanted to post to bring awareness.  If you find yourself needing something like I’ve described, take a look.  I continue to be impressed with the Node.js community and all of the cool things that people are willing to contribute.  I hope and plan as I get deeper into this journey that I’ll start contributing as well.  We get better together.



Error Handling with Node.js and Express

Probably a topic that has been written about quite a bit, but as with some blog posts, doing it one more time is hopefully another piece of community documentation, as well as potentially helpful to someone struggling with the task.

Express is probably the web framework that most people hear or learn about first, and with my limited Node experience it was the first one I came across.  The following paragraphs won’t be about competitors to it, or really anything beyond what the title describes, but if you are interested, check out (there are several others)

This week, while looking at a bug in our platform, I quickly realized that the application was not the problem, but rather the underlying API, which is written in Node and utilizes Express.  I kept seeing "Can’t set headers after they are sent to the client" in the logs.  When looking at the route, the first thing I noticed was that when an error occurred, the code was just using 500 as a catch-all and then carrying on.
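Roughly like this (reconstructed for illustration; db.getWidgets stands in for the real data call):

    router.get('/widgets', (req, res) => {
      db.getWidgets((err, widgets) => {
        if (err) {
          res.status(500).json({ error: 'Something went wrong' });
          // no return here, so execution falls through...
        }
        res.status(200).json({ widgets }); // ...and a second response is sent
      });
    });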

Beyond the generic exception, app.js had an attempt at an error handler that was also sending headers back to the requestor.  So to clean this code up and put a better status on the response, I changed the route to this.
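A sketch of the cleaned-up route:

    router.get('/widgets', (req, res, next) => {
      db.getWidgets((err, widgets) => {
        if (err) {
          // Hand the error to the error-handling middleware and stop here
          return next(err);
        }
        return res.status(200).json({ widgets });
      });
    });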

So I cleaned up the code inside of the route, which both returns a response correctly and doesn’t double-send headers, since the res.status(200).json(...) at the end won’t be called unless there truly are no errors in the processing that the API is doing. Now, popping over to app.js, which handles app setup and route configuration, I added the following at the bottom of the file.  Important note: the wildcard route must be at the bottom, as it is essentially a fall-through and will match anything.
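Reconstructed along the lines of the standard Express pattern (the exact messages are illustrative):

    // 1. Wildcard route: anything unmatched is treated as Not Found
    app.use((req, res) => {
      res.status(404).json({ message: 'Not Found' });
    });

    // 2. Development error handler: expose the full error in the payload
    if (app.get('env') === 'development') {
      app.use((err, req, res, next) => {
        res.status(err.status || 500).json({
          message: err.message,
          error: err
        });
      });
    }

    // 3. Production error handler: status plus a sanitized message only
    app.use((err, req, res, next) => {
      res.status(err.status || 500).json({
        message: err.message || 'Invalid Request'
      });
    });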

A few things about the above:

  1. Note that before anything else I have the wildcard route, treating those requests as Not Found and returning 404 as the HTTP status.
  2. A development-only error handler.  In the response payload I’m setting error: err, which shows the actual descriptive error that I really only care about in development.  Notice there is no call to next(), as I want to end the pipeline right there.
  3. The production handler, which only returns the HTTP status and the error message I want, such as "Invalid Request", "Bad Request, check your parameters" or whatever I want to return to the caller.

Like I mentioned in my last post, I’ll be sharing things I’ve learned, am learning or just general fodder along the way.  I hope this is helpful to someone and if not applicable to what you are working on, perhaps just something new that you can file away!


My First Month with Node.js

First off, I’ve been biased for years when it comes to JavaScript.  I remember the early days of writing code that was specific to browsers, multiple implementations of accessing the DOM, and the general lack of tooling and libraries out there.  For years, I wanted everything to be server side because writing front-end code was just not what I was into (not that I learned Flash or Silverlight either).  Needless to say, when I joined up at my new company, I wanted to start fresh and give their stack a chance, and it is a far cry from the C# and ASP.NET that I’ve been doing for years.  I actually haven’t been coding much for the past few years and was planning on continuing along the same path here, but once I got into the problem domain and the stack, I just dove in, all giddy like the first day I opened vi in CSCI 101.

So one month in … you can call me pleasantly surprised and thoroughly excited to learn more and sharpen my skills.  A few quick things I’ve enjoyed (some Node specific and some just developer enjoyment):

  • Modularity – simple and easy to add external sources, modularize your code and share/extend with other projects
  • Simplicity – I find myself focusing less on frameworks, dependencies and more on just writing the logic that is important to our projects
  • Testability – code is easy to test.  Mocha/Chai combined together have been a breeze to work with
  • VSCode – I know, not Node related but wow I love this editor and its extensions
  • MongoDB – ease of working with Mongo whether it be the ODM (Mongoose) or writing queries with the native lib.

Things I’m getting used to

  • Working again with a scripting language.  I’m using a linter, but it is still taking some time to get used to.  Also, working more with hashes vs strongly typed objects (this again is probably more a matter of my understanding, learning and skill at the moment)
  • Callbacks and thenables.  Async workflows are taking me some time to get used to, but I do like them.  Spinning up threads, while problematic, was how I was taught and a pattern I’ve used, so this isn’t a bad thing; it’s just taking time to really get used to.

I’m going to be sharing more about my journey from C# and .NET to Node.js and the other exciting things I’m going to be up to over the next few months.  Would love feedback and dialogue along the way!


Takeaways from my First Analytics Conference


I went to my first analytics conference this week here in Dallas (#GartnerBI), and what I wanted to share are just some general impressions of the market and a cool strategy for embracing the change that is occurring in our businesses as it relates to data and analytics.

It’s amazing to me how just 2-3 years ago the concept of a Data Lake or even a Data Scientist really didn’t exist, and now you can’t go anywhere in the data space without hearing those words uttered.  Don’t even try to mention "Big Data" anymore and expect to be ahead of the curve.  The funny thing is, lots of folks are talking about it, but I couldn’t find many who are actually doing it outside a select few (well, not doing it as the industry says we should).


Wow!  There are so many tools, which to me signifies that the market is still emerging.  It’s still being defined.  I would expect a great many of these to either become the standard or get absorbed into a larger offering.  Everything from data preparation to visualization.  You can build dashboards, semantic layers, transformations and extracts with ease, all in a distributed fashion with little IT involvement.  Doesn’t that sound fun?  Depends on who you ask, I guess (sounds kind of cool to me).  I’m not really going to dig into vendors, but my opinion is that there is so much choice that there is something for everyone.  And you don’t even have to go with a big-name vendor to get a very robust and enterprise-capable solution.


Best I can tell, the industry is starting to settle around certain platforms for specific usage. For a Data Lake, Hadoop seems to be the solution of choice.  Add on top the things that a distro brings to the table, and Hadoop can be a one-stop shop for your data ingestion, processing and warehousing needs.  Whether you go with Vendor A or Vendor B is your personal choice.  Want rapid, streaming, multi-tool access to lots of data?  In-memory processing is the way to go.  And I don’t mean the in-memory capabilities of a columnar data store; I’m talking about Spark or others that can connect to multiple sources, pull data into memory and allow the speed and agility that many are requesting.  What I liked about some of them is that they fit right into the Hadoop ecosystem while also being able to stand alone.  As for visualization … too many to list and not really exciting.

But one thing was exciting: this thing called Search-Based Analytics.  A couple of vendors were showing demos, which I totally dug.  The idea is that a user/customer can type in a search, Google-style, and the platform understands what you are looking for and displays charts and tables about your data based on the search criteria.  Really fascinating concept.


So many topics and tools were centered around this idea of distributed data prep, data quality and advanced analytics, all done without IT and, more importantly, done much closer to the business and the data, where true value can be realized.  Sidestep: I love small companies … always have, actually.  The ability to make an impact on revenue or a direct contribution towards solving a customer’s problem excites me.  This idea of self-service almost puts each department/BU in start-up mode.  You can add and provide value on your own without some big, hefty BI process.

IT is going to get left behind if it doesn’t enable people to bring value sooner and cheaper. Our job in technology is not always going to be leading.  That’s why I love the word Enabler.  A CIO in this day and age could almost be re-branded as a Chief Enablement Officer, because that’s what we should be doing. Leading is a task, not an identity.  (That’s a whole other post.)


So I’ll be honest, I’m not huge into tools and software.  I’ve often said to folks that I really don’t like computers.  I get a funny look being that I’ve made my career off of them and really seem to enjoy them.  Then I qualify that with, I like solving problems.  And computers and the computational and storage power are wonderful vehicles for solving problems.  So tools aside, the one thing I was hoping to learn I actually did.  Kudos again to the conference for providing something for everyone.

Gartner has this notion of being Bimodal when it comes to how an organization approaches BI and advanced analytics.  On one hand you have the often traditional, heavier, enterprise-grade process that gathers needs and cranks out capabilities on the robust traditional BI platform.  Think data warehouse with ETL and everything in between.  What you get from this is stability, security, guarantees and predictability.  All wonderful things.  But what you generally don’t have is flexibility and agility.  That’s where the modern data warehouse and advanced analytics platforms come in.  You can pull all kinds of data out of your data lake/flat file/DBMS/services in some kind of self-service tool and then apply whatever math or code or discovery you like on top to produce some insight. Maybe it’s valuable.  Maybe it’s not.  But here is what I liked about the approach and thought process Gartner was pitching (to tie this up).


It’s not an either/or scenario.  Perhaps the self-service model is just fine for data sources that are too volatile to ever contain in traditional ETL processes. Or perhaps they aren’t evolving as much, and when the time is right, that set of flows with lineage is migrated into your strongly typed ETL capabilities and loaded up into SQL Server or Oracle or Vertica (insert some other data store) for reports and dashboards.  It really depends.  And most scarily, it’s up to you as the implementer of this stuff.


I often thought this space was much more prescriptive on how to do things than the world I was used to in application development.  But with these new sets of platforms, tools and most importantly PROBLEMS, perhaps that is changing. I used to only care about a DBMS because it’s where my app persisted some data.  I often cared even less because an ORM or a custom rolled Data Mapper abstracted it.  Now I find myself wondering … perhaps I’ve been missing something.


Scrum or Kanban

I was asked this the other day by a scrum master at my current organization.  For disclosure, I’ve been involved with Agile delivery for about 8 years or so.  I started out with Scrum and have been involved with Scrum, XP and Kanban ever since.  I’ve been a developer, a delivery leader, a coach and a trainer over this period of time so hopefully you’ll see my opinion as being informed.

My response to this individual’s question was that I can take a newly formed team and get them to predictability in Scrum regardless of talent and skills, given just a general desire to learn the framework.  Scrum is very good at providing a framework for delivery.  It’s got artifacts and ceremonies, and there has been so much literature written on the topic that there is no shortage of guidance.  Notice I didn’t mention anything about self-organizing or empowered teams.  To me, this is a basic ingredient no matter what direction you choose.  If you desire a more command-and-control type structure, perhaps you are better off not reading any further.

So what about Kanban … I have two things I like to consider when moving to Kanban.

  1. Do I have a high-performing team that is mature and looking for ways to get even better?
  2. Do I have a new team made up of strong folks who are responsible enough to wield the power of a lean delivery model?

If I don’t have either of those, I tend to shy away from Kanban.  The reason is, I’ve seen too many teams abuse the very things about Kanban that make it so powerful.  With no time box, people work endlessly.  Again, without a time box, the size of a work item tends to grow.  With no defined sprint planning, team members don’t work together to plan work.  With no defined retro, they often don’t improve on demand.  With no prescribed demo cadence, an on-demand demo just doesn’t seem to occur.  So unless you have the right folks or a strong leader, these things just don’t happen.

I get it, you could argue that all of the missing components in Kanban could be added in, but to be honest, I don’t like that kind of prescriptiveness.  I believe when Kanban is working, demos spawn at any point and retros occur when someone feels the need to improve.  I do still like cadence-based planning, but other than that, I’m not really in love with the idea of ScrumBan.

So to wrap this up, my opinion is people are what’s important, not necessarily the process. I’ve got plenty of experiences that show this at small scale as well as at large scale while implementing the Scaled Agile Framework.  It doesn’t truly matter at the end of the day what method you use, but that you choose the one that best fits your people and tailor it according to your environment.  Know what success looks like and measure your progress towards that and I think you will be just fine.


PMO in Agile

This keeps coming up in quite a few conversations I’ve been having lately: does the PMO have a place in an Agile software delivery team/company/project?  I have a couple of opinions on this, and it’s really an either/or.

If your organization currently has a Project Management Office and you have a strong hierarchical reporting structure, I believe that you can repurpose this group of folks.  Especially in the case that the people in the organization are really looking to embrace a new role.  Don’t get me wrong, this will come with challenges.  Traditionally the PMO is a command and control type org that breaks down work and assigns out tasks and manages to delivery.  In an Agile model, you typically don’t have much use for this type of behavior as you are working with a group of empowered, self-organizing and intelligent knowledge workers.  However, what I think often goes left unsaid is the need for vertical reporting of team data as well as program level data.  As a delivery leader, I’m not so interested in sprint burn downs, task burn downs etc, but I am interested in predictability metrics, feature completion against a roadmap and overall quality of the codebase.  This is stuff that I think the PMO is very well suited for.  In this model, I’d call them “Ambassadors of Transparency”.

Now if you are starting something up from scratch, my opinion is that adding the layer of a PMO is probably unnecessary.  With the right tooling and the right scrum master or product owner, I believe this information can be shared and flowed in a different manner.  I find that in this model, the team(s) are all in on the upcoming milestones, and they are owners of their data in addition to their code, so they are more than happy to share their progress.  Basic Scrum doesn’t really have a recipe for this and neither does the Kanban method, but I think that’s what I like most about finding the right balance of process and people … you tailor to your needs based on the framework.


Visualization for Old Metrics, with a Twist

When looking at how to measure a team’s predictability, it’s common to start with their capacity.  In the case of a Scrum team (or even some Kanban teams), story points are a good place to start.  What I’ll usually look for is the team’s average velocity (sum of story points accepted / number of sprints completed).  Once I’ve got that calculated, I’ll take a look at the trend of planned work vs delivered work as a percentage.  What I mean is: coming out of sprint planning or queue replenishment, I’d expect the team to place a planned velocity on that sprint.  Then, at the end of the sprint, I’d calculate the percentage.  For instance:

Planned 20 points during sprint planning
Accepted 19 points by the end of the sprint
That then equals a 95% delivery rate.
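In code form, the same arithmetic looks like this (a quick sketch with made-up numbers):

    const sprints = [
      { planned: 20, accepted: 19 },
      { planned: 18, accepted: 18 },
      { planned: 22, accepted: 17 }
    ];

    // Average velocity: sum of accepted points / number of sprints
    const averageVelocity =
      sprints.reduce((sum, s) => sum + s.accepted, 0) / sprints.length;

    // Delivery rate per sprint: accepted / planned
    const deliveryRates = sprints.map(
      s => Math.round((s.accepted / s.planned) * 100) + '%'
    );

    console.log(averageVelocity); // 18
    console.log(deliveryRates);   // [ '95%', '100%', '77%' ]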

But let’s say a team seems not to be on track for the current release plan.  There are always a million different reasons for this.  But if you look at things at a system level and not necessarily a team/person level, you might be surprised at what you find.  So I started producing the following chart.


One of the things I like to see teams do is categorize their work as it sits on the backlog.  In the case of the above, yellow work is "Production Support", green work is "Normal Development", orange work is "Maintenance" and the missing blue one is "Expedite".  What I really like about this is that when plotted as a column chart, you can easily see where the story points are being completed sprint by sprint.  This could easily be a throughput chart if you were doing Kanban, but I like to show the points if I have them.  Where I see the biggest value is in plotting the "Planned" velocity as a line overlaid on top of the columns.  This gives you a nice visualization of how a team is doing sprint over sprint compared to their plan.  As an example, in Sprint 3 the team takes a bit of a hit on the amount of "Normal" work completed, but as you can see, they still hit their planned velocity; those points were divided up between "Production Support" and "Maintenance".  Really cool when you combine these two data points together.

Of course, with any metric or visualization, there is always more to the story.  I’ll always lean towards knowing my team and what’s going on with the dynamics vs blindly trusting some chart.  However, if you know something’s not right and you aren’t quite sure how to diagnose it, data is your friend.