
Having a bias towards delivery


So I’ve been having a lot of conversations lately about Scrum and agile processes, and it got me thinking: what is the difference between an effective agile team and an ineffective one? What is it that makes some projects really excel in an agile world, and others not?

As I was thinking about this, my mind went to the most effective developers I know. I’ve had the privilege of working with some really inspirational developers and engineers in my career, and the more I thought about it, the more I realized there was a common thread between them all.

They all had a bias towards delivery.

So what exactly does that mean? And how does that lead to success in this field? Let’s start with the term bias. According to Merriam-Webster, a bias is “an inclination of temperament or outlook.” So a bias is a focus on what we are inclined to do.

So when I’m talking about a bias towards delivery, I am talking about focusing our efforts with an inclination towards what the delivery will be, or what the outcome of the effort will be.

Isn’t this just Agile?

One of the most common problems in agile development is that many team members, or entire development teams, approach Scrum as just “take a checkpoint every two weeks.” Or consider how often we have the discussion of the “definition of done.” All of this is built around how we break up work, and that’s valuable, but all too often I still see it fail, because it fails to prioritize shipping software over going through the motions.

So as much as those activities are valid, I do feel like all too often the delivery element is lost in the discussion. If you look at the Agile Manifesto, the creators of this philosophy saw this all too clearly. The very first principle of agile is:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Principles behind the Agile Manifesto


So this is all great discussion, but let me get tactical about how this becomes a problem for software engineering teams. How many times have you seen user stories that have no acceptance criteria, or worse, just say “meets definition of done”? That does not define what the delivery at the end actually is; it’s just boilerplate that’s supposed to serve as a guide.

Let’s take a couple of concrete examples, first you have the following as a User Story:

  • Title: As a user, I should be able to edit my profile and make changes to my contact information.
  • Description: The user should be able to update contact information in their profile including email address, phone number, or twitter handle.
  • Acceptance Criteria: The solution will include all required changes, and appropriate unit tests, and conform with our definition of done.

Now to some people that looks perfectly fine and reasonable, but I would argue that it is very ambiguous and doesn’t have a focus on delivery. My biggest question is, “What is the expected delivery at the end of the sprint?” For example, I would ask the following questions:

  • Is the expectation that I just have a PR Submitted?
  • Does it need to be merged?
  • What should a demo of this to the product owner look like?

It is not immediately clear from this work item what delivery of the solution looks like. Now this one isn’t terrible, and we can probably derive a lot of the information from standard practices in an established team, but the next example is much worse. The most common culprit I’ve found is the infamous “Research Spike”.

You’ve probably seen stories like this before:

  • Title: Spike – Research options of using PostGres vs Cosmos for Geospatial queries.
  • Description: Evaluate and compare the benefits of using PostGres as a GIS data store vs Cosmos DB for Geospatial queries.
  • Acceptance Criteria: Complete evaluation of technologies to help empower architectural decision.

Now this is really not a good use of effort: it provides almost zero guard rails for the developer and no support for timeboxing the evaluation. There is no real way to discern what specifically the evaluation would entail, and it leads to things like “Moving this evaluation forward as we could use more time.”

A great way to change this, I would argue, is to focus on what the delivery in two weeks will look like, and also to get very specific about the focus of the spike. So we change the above to something like this:

  • Title: Spike – Research options of using PostGres vs Cosmos for Geospatial queries.
  • Description: Evaluate and compare the benefits of using PostGres as a GIS data store vs Cosmos DB for Geospatial queries. Evaluation will provide the following key answers:
    • What are the potential performance gains to query execution times?
    • What new features will be available after changing platforms?
    • What potential feature loss could occur with the move?
    • What are the cost implications of the switch?
  • Acceptance Criteria: Delivery will include a report focusing on the questions above and a Proof of Concept showing the relevant details from the report during our sprint demos.

Looking at the above, it becomes very clear what the expectations of the spike are, and what the delivery at the end of the two weeks looks like. It also means that the odds of this item “moving to next sprint” are very low. There is a possibility of this leading to additional research spikes, but it focuses the effort on what should be delivered.

How can I start adopting this approach?

At the most basic level, a common practice of many of the great engineers I’ve had the privilege to work with is that they start the sprint by looking at each work item and asking, “What am I going to deliver at the end of the sprint for this?” They decide on what that is, and work backwards to what they need to do to get there. I would recommend starting there and aligning your mindset towards what you are ultimately going to deliver. This will cause you to ask more questions of the work items you’re assigned and push you towards the more tangible delivery items that you should be focused on.

Errors and Punishment – The Art of Debugging


So recently I have been blessed enough to talk to several people who are new to the software development field, and to do some mentoring. And frankly, I’m the lucky one here, as there are few things better than meeting with people who are new to this industry and getting to engage with their ideas. If it isn’t something you do regularly, you should start.

But one of the things that has become very apparent to me is just how little time is spent actually teaching how to debug. I saw this when I was teaching: there’s a tendency by many in academia to show students how to code, and when they run into errors, to show them how to fix them. Which at its core sounds like “Yes, Kevin, that’s what teachers do…” but I would actually argue it is a fundamentally flawed approach. Error messages and fixing things that are broken are a pretty large part of being a developer, and by giving junior developers the answer, we are doing the proverbial “giving them a fish, rather than teaching them to fish.”

To that end, I wanted to at least start the conversation on a mindset for debugging, and how to figure out what to do when you encounter an error. Now obviously I can’t cover everything, but I wanted to give some key tips on how to approach debugging when you have an error message.

Honestly, debugging is a lot like a police procedural, and it’s a good way to remember the steps, so hang with me through the metaphor.

Tip #1 – Start at the Scene of the Crime – The Error Message

Let’s be honest: I know this sounds basic, but you would be surprised how often even senior devs make this mistake. Take the time to stop and really read the error message. What I mean by that is do the following:

  • What does the error message tell you?
  • Can you find where the error is occurring?
  • Is there a StackTrace?
  • What component or microservice is throwing the error?
  • What is the error type?

Looking at an error message is not just reading the words of the error; there are usually other clues that can help you solve the mystery. Things such as the exception type, or a stack trace where you can find the exact line of code, are going to be critical.

Honestly, most people just read the words and then start making assumptions about where an error occurred. And this can be dangerous right out of the gate.

Tip #2 – Look for Witnesses – Digging through logs

Now, in my experience an error message is only one piece of the puzzle / mystery; the next step is to look for more information. If you think about a police procedural on TV, they start at the crime scene, but what do they do next? They talk to witnesses!

Now, in terms of debugging we have the added benefit of being able to refer to logs. Most applications have some form of logging, even if it’s just outputting messages to a console window, and that information can be very valuable in determining an error message’s meaning.

Start looking for logs that were captured around the same time, specifically looking for:

  • What was occurring right before the error?
  • What data was being moved through the solution?
  • What was the request volume that the system was handling?
  • Were there any other errors around the same time?

Any information you can find in the logs is critical to identifying and fixing the issue.
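If the system you are debugging barely logs anything, even Python’s standard logging module gets you the timestamps and context described above. Here is a minimal sketch; the order-processing function and its fields are invented purely for illustration:

```python
import logging

# Timestamped, leveled log lines make it possible to answer "what was
# occurring right before the error?" after the fact.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    level=logging.INFO,
)
logger = logging.getLogger("orders")

def process_order(order_id, quantity):
    # Log the inputs right before the risky operation, so the log tells
    # you what data was moving through the solution when things broke.
    logger.info("processing order %s (quantity=%s)", order_id, quantity)
    if quantity <= 0:
        logger.error("rejected order %s: invalid quantity %s", order_id, quantity)
        raise ValueError("quantity must be positive")
    return quantity * 10

process_order("A-100", 3)
```

Even this small amount of context, captured consistently, turns “the service failed last night” into a sequence of events you can actually reason about.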

Tip #3 – Deal only in facts

Now this next one is absolutely critical, and all too commonly overlooked. Many developers will start making assumptions at this point, immediately announcing “I know what it is” and changing things. Resist this urge, no matter what.

Now, I’m not going to lie: some errors are easy, and with a little bit of searching it becomes really easy to see the cause and address it, and if you are 100% sure, go ahead. But from the TV-procedural perspective, this is the difference between the rookie and the veteran. If you are new to this field, resist the urge to jump to an answer, and deal only in facts.

What I mean by this is: do not let jumping to conclusions cloud the story you are building of what occurred and why.

Tip #4 – Keep a running log of findings and things you tried

This is something I started doing, and it pays dividends. Just like the cops in a police procedural make a case file as soon as they capture their original findings, you should too. Keep a running document, in Word or, as I do, in OneNote. I copy into that document all the findings:

  • Error Messages
  • Relevant Logs
  • Configuration Information
  • Dates / times of the errors occurring
  • Links to documentation

Anything I find goes in, and I keep appending new information to the document as I find it.
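If you want to automate the habit, a few lines of Python are enough to append timestamped findings to a case file. This is just a sketch; the file name and categories are examples, not a prescribed format:

```python
from datetime import datetime, timezone
from pathlib import Path

def log_finding(case_file, category, note):
    """Append a timestamped finding (error message, log excerpt,
    configuration detail, doc link, or attempted fix) to the running
    case file for the current investigation."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with Path(case_file).open("a", encoding="utf-8") as f:
        f.write(f"[{stamp}] {category}: {note}\n")

# Example entries for a hypothetical investigation.
log_finding("case.md", "Error Message", "KeyError: 'user_id' in billing worker")
log_finding("case.md", "Tried", "Rolled back requests 2.31 -> 2.28; still failing")
```

The timestamps matter more than they look: they let you correlate your own attempts with the system’s logs later.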

Tip #5 – Look for changes

The other key piece of evidence most people overlook is the obvious question of “What changed?” Code is static and does not degrade over time. If it was working before and isn’t anymore, something changed. Look for what might have changed in the solution:

  • Was code updated?
  • Were packages or libraries updated?
  • Was a dependency updated?
  • Was there a hardware change?

All of this is valuable evidence to help you find the cause.

Tip #6 – Check documentation

A good next step is to check any documentation, and what I mean by this is look to any reference material that could explain to you how the code is supposed to work. This can include the following:

  • Documentation on libraries and packages
  • ReadMe / GitHub issues / System Docs
  • Code Comments

Any of these can help you better understand and identify how the code is actually supposed to behave.

Tip #7 – Trust Nothing – Especially your own code

At this stage, again, people like to make assumptions, and I can’t tell you the number of times I have done this personally: you stare at code and say it doesn’t make sense. “I know X, Y, and Z are correct, so why is it failing?” Only to find out one of your assumptions about X, Y, or Z was false. You need to throw all assumptions out the window and, if necessary, go and manually verify everything you can. This will help you identify the underlying problem in the end.

Also at this stage I see the other common mistake: not keeping your ego out of debugging. Many developers will look at the code they’ve built and trust it because they built it. But this bias is usually the most damaging to your investigation.

Similar to the running joke of “The husband always did it…”, I recommend adopting the philosophy of “guilty until proven innocent” when it comes to any code you write. Assume that something in your code is broken, and until you can prove otherwise, don’t start looking elsewhere. This will help in the long run.

Let me give an example: say I am building code that hits an API. I write my code, it looks good to me, I go to run it, and I get back a 404 Not Found. I’ve all too often seen devs who would then ping the API team to see if their service is down, or networking to see if something is blocking the traffic, all before even checking, “Did I get the endpoint right?”

Doing this makes you look foolish and wastes people’s time. It’s better to verify that your code is working properly first; then you are empowered to have that conversation with networking as:

You: “I think it’s a networking issue.”

Network Engineer: “Why do you think that?”

You: “I’ve done the following to rule out anything else…so I think it could be ________________”
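Before that conversation even happens, a quick sanity check of the request you are actually making is cheap. A minimal sketch of the habit; the function, host, and path here are hypothetical:

```python
from urllib.parse import urlsplit

def describe_request_url(base_url, path):
    """Build the full request URL and return the pieces worth eyeballing
    before blaming the network: the full url, host, and path."""
    full = base_url.rstrip("/") + "/" + path.lstrip("/")
    parts = urlsplit(full)
    return {"url": full, "host": parts.netloc, "path": parts.path}

# A typo like "/api/v1/userz" jumps out the moment you print the URL you
# are really calling, instead of the one you think you're calling.
print(describe_request_url("https://api.example.com", "api/v1/userz"))
```

Thirty seconds of printing the assembled URL, status code, and response body rules out the most embarrassing causes of a 404 before you ever ping another team.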

Tip #8 – Try to reproduce in isolation / Don’t Make it a hatchet job!

If you get stuck at this point, a good trick I find is to try to reproduce the error in isolation. Especially when you are looking at a microservice architecture, there can be a lot of moving parts, and it can be helpful to recreate the error away from the existing code base by isolating components. This makes it easier to gather evidence, and, not unlike a police procedural where they re-enact the events of a theory, it can be a great way to isolate a problem.
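To make the isolation idea concrete, here is a minimal sketch in Python. The pricing function and its inputs are invented for illustration; the point is lifting the suspect logic out and driving it with fixed, deterministic inputs instead of re-running the whole system:

```python
def apply_discount(total, percent):
    # Suspect function copied out of the (hypothetical) pricing service
    # so it can be exercised on its own, away from queues and databases.
    return total - total * (percent / 100)

# Deterministic repro attempts: each case is (input total, percent, expected).
cases = [(100.0, 10, 90.0), (80.0, 25, 60.0), (50.0, 0, 50.0)]
for total, percent, expected in cases:
    actual = apply_discount(total, percent)
    status = "OK" if abs(actual - expected) < 1e-9 else "REPRODUCED BUG"
    print(f"apply_discount({total}, {percent}) = {actual} [{status}]")
```

If the bug reproduces here, you have a tiny, fast test bed; if it does not, you have just learned the problem lives in the integration, not the logic, which is equally valuable evidence.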

The one thing to try really hard to avoid, is taking a hatchet to code, all too many times I’ve seen people start doing this pattern to solve a problem:

  • I’m going to try this…
  • Run Code
  • Still Broken…
  • Change this…
  • Run Code
  • Still Broken…

You are actually making your life harder by not being methodical. Now I’m not saying don’t try things, but be more deliberate, and make sure you take time to record your thoughts and attempts in your running log. This can be critical to keeping things logical and methodical and not spinning your wheels.

Tip #9 – When you find the answer, write it down

When you finally find the answer, there is a tendency to celebrate, push that commit, cut that PR, and be done. But really you’re not doing yourself any favors if you stop there. I find it helpful to make sure you take the time to answer the following:

  • Do I fully understand why this occurred?
  • Can I document and explain this?
  • Am I convinced this is the best fix for this problem?

Really you want to make sure you have a full understanding and complete your running log by documenting the findings so that you can refer to them in the future.

Tip #10 – Make it easier and test in the future

The other thing that is largely overlooked, and skipped due to the “fix celebration,” is the debrief on the issue. All too often we stop and assume that we are done because we made the fix. But really we should be looking at the following:

  • Is there an automated way I can test for this bug?
  • How will I monitor to make sure my fix worked?
  • Does this hot fix require further work down the line?
  • Does this fix introduce any technical debt?
  • What can I do to make this type of error easier to debug in the future?
  • What parts of the debug and testing cycle made it hard to identify this error?
  • What could I have done differently to make this go faster?
  • What does this experience teach me?

These kinds of questions are critical to ongoing success in your software development career and the health of your project longer term.
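As a concrete, hypothetical example of the first question, a regression test that pins the bug in place might look like this (the `parse_port` function and the whitespace bug are invented for illustration):

```python
import unittest

def parse_port(value):
    # Hypothetical fix: the original code crashed on values with
    # surrounding whitespace (e.g. " 8080\n" read from a config file),
    # so the fix strips before converting.
    return int(value.strip())

class TestParsePortRegression(unittest.TestCase):
    # Pin the bug down so it cannot silently come back.
    def test_whitespace_wrapped_port(self):
        self.assertEqual(parse_port(" 8080\n"), 8080)

    def test_plain_port(self):
        self.assertEqual(parse_port("443"), 443)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParsePortRegression)
unittest.TextTestRunner(verbosity=2).run(suite)
```

A test like this costs minutes to write and turns “I think I fixed it” into something your CI can verify on every future change.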

I hope you found these 10 tips helpful!

How to leverage private modules in your YAML Pipelines


I’ve made no secret about my love of DevOps, and to be honest, over the past few months it’s been more apparent to me than ever before that these practices are what make developers more productive. Taking the time to set up these processes correctly is extremely valuable and will pay significant dividends over the life of the project.

Now that being said, I’ve also been doing a lot of work with Python, and honestly I’m really enjoying it. It’s one of those languages that is fairly easy to pick up, but the options and opportunities its flexibility provides take longer to master. One of the things I’m thankful we started doing was leveraging Python modules to empower our code re-use.

The ability to leverage pip to install modules into containers creates this amazing ability to separate the business logic from the compute implementation.

To that end, there’s a pretty common problem that I’m surprised is not better documented: if you’ve built Python modules and deployed them to a private artifact feed, how can you pull those same modules into a Docker container?

Step 1 – Create a Personal Access Token

The first part of this is creating a personal access token (PAT) in Azure DevOps, which you can find instructions for here. The key is that the PAT must have access to the Packaging scope; I recommend read access.

Step 2 – Update DockerFile to accept an argument

Next we need to update our Dockerfile to accept an argument so that we can pass that url in. The url you’re going to pass takes the following form:

https://{user}:{PAT}@pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed name}/pypi/simple/

(The {user} value can be any non-empty string; the PAT is what actually authenticates the request.)
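As an illustration, the index url can be assembled from its pieces. This is a sketch assuming a project-scoped feed; the username portion (“build” here) is an arbitrary placeholder, since the PAT carries the authentication:

```python
def build_feed_url(pat, organization, project, feed_name):
    """Assemble the pip index URL for an Azure Artifacts feed.
    The PAT is embedded as basic-auth credentials in the URL, so treat
    the result as a secret: pass it in via a pipeline variable and
    never commit it to source control."""
    return (
        f"https://build:{pat}@pkgs.dev.azure.com/"
        f"{organization}/{project}/_packaging/{feed_name}/pypi/simple/"
    )

# Placeholder values for illustration only.
print(build_feed_url("<PAT>", "myorg", "myproject", "myfeed"))
```

Keeping the assembly in one place also makes it easy to switch to an org-scoped feed later (which drops the project segment) without hunting through scripts.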

Add the following to your Dockerfile:

ARG feed_url=""
RUN pip install --upgrade pip
RUN pip install -r requirements.txt --index-url="${feed_url}"

The above provides the ability to pass the url required for accessing the private feed into the process of building a docker image.

Step 3 – Update YAML file to pass the argument

Next, pass the feed url in as a build argument from the pipeline’s YAML:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: 'docker build -t="$(container_registry_name)/$(image_name):latest" -f="./DockerFile" . --build-arg feed_url="$(feed_url)"'
    workingDirectory: '$(Agent.BuildDirectory)'
  displayName: "Build Docker Image"

At this point, you can create your requirements file with all the appropriate packages, and the container image will build when you run your automated build.
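For reference, a requirements file that mixes public and private packages might look like this. The package names are hypothetical, and this assumes the private feed has upstream sources to PyPI enabled so that a single index url can resolve everything:

```
# requirements.txt
requests==2.31.0              # public package, resolved via the feed's PyPI upstream
mycompany-geo-utils==1.4.0    # private module published to the Azure Artifacts feed
```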

Book Review – Clockwork


Hello all! As many of you know, I read a good bit, and I also like to use Audible; a great way to pass time while traveling or driving is to listen to audiobooks. Right now I’ve been on a real kick to learn more about business perspective and productivity. Some of the great books I’ve read and talked about before are:

  • Grit
  • Essentialism
  • 10x Rule
  • Deep Work

But I wanted to take a minute to talk about the book I just finished, Clockwork: Design Your Business to Run Itself, by Mike Michalowicz. Now I have to admit, I only picked it up after hearing about it a couple of times and seeing it show up in my “recommended reading” lists more than once.

So I was skeptical about this book, mainly because it is aimed at people starting their own business, and I work for a major corporation. How could it be helpful to me? Well, I have to admit, I was wrong.

I found this book to be really thought provoking, and it caused me to re-examine a lot of the activities and work I do, measuring their impact and importance to success. The author makes the argument that in any organization, everyone has a responsibility to do the following:

  1. Protect the Queen Bee Role
  2. Serve the Queen Bee Role

And basically the key idea of the book is to take the QBR (the Queen Bee Role), the crucial part of your job, and make all of the actions you take serve it above all others. The argument is that I should spend every second of my work day focusing on that QBR, and when an activity takes away from it, I should get that activity done as soon as possible or, if possible, move it off my plate.

The intention is that it makes me focus on the bigger picture and creates a scenario where I can step away from work and feel comfortable. I have a tendency to have a hard time unplugging, and recently I’ve been setting goals to help myself do it. I found that when I started to put this into practice, I was able to unplug with less stress, and it helped my overall mental health. For me, I started with the intention of doing the following:

  • Blocking 1 hour for lunch everyday
  • I will not eat at my desk

This forces me to take a lunch break away from my desk. Honestly, it sounds small, but it has paid huge returns: when I come back to work I am more focused, and at the end of the day I am less mentally drained. My stress level around work has gone down, and I find the work I’m doing much more satisfying.

Below is a video that summarizes some of the ideas of the book. The value of this book, though, isn’t the ideas, but how you execute them.

Building a facial search in less than 2 hours


So, I’ve been pretty up front that I’ve been expanding my skills in the data science and AI space, and it’s been a pretty fun process. I wanted to point everyone to a demo I was able to build out very quickly.

Facial search is a pretty common use case; so much so that we see it everywhere. Google Photos allows you to tag people in photos and indexes them for you automatically, and Facebook suggests tags for people when you upload new pictures.

So I wanted to build a service that would take a set of selected images of 3 members of my family, and build a solution that would allow me to run any family photo through it and have it search for the known members of my family. Seems like a pretty basic use case, but I’ve been wanting to get some hands-on experience with Azure Cognitive Services.

So I researched, read through the documentation, and decided I was going to set aside 2 hours and see how far I could get on the following:

  • Build a console app to read in images of 3 family members.
  • Build logic to upload an image and read back attributes about that person, things like age, gender, hair color, glasses, etc.
  • Build logic to search and match faces of people in a photo with the library that was previously uploaded.

The cool news is that I was able to implement all that logic. The full solution can be found here.

So to start, I focused on the first use case. Azure Cognitive Services has this concept of “PersonGroups” that can be leveraged with the SDK. For starters you need to install the SDK from NuGet; the required package (the one providing the FaceClient used below) is Microsoft.Azure.CognitiveServices.Vision.Face.

The first key piece is the client, which I configured in a parent class as follows:

public class BaseFaceApi
{
    protected string _apiKey = ConfigurationManager.AppSettings["FaceApiKey"];
    protected string _apiUrl = ConfigurationManager.AppSettings["FaceApiUrl"];
    protected string _apiEndpoint = ConfigurationManager.AppSettings["FaceApiEndpoint"];
    protected FaceClient _client;

    protected void InitializeClient()
    {
        // Authenticate with the API key and point the client at the configured endpoint.
        _client = new FaceClient(new ApiKeyServiceClientCredentials(_apiKey));
        _client.Endpoint = _apiEndpoint;
    }
}
This allows the configuration to live in app.config, and this face client is leveraged for all operations that hit the API.

The Face API leverages this concept of “PersonGroups” and “Persons” to handle the library of faces you are going to compare against. The process is broken into 4 parts.

  • Create the group
  • Create the person
  • Register Images for that person
  • Train the Model

If you review the source code you will find that I have broken these out to separate methods. The benefit of creating the groups is that you can limit your searching to specific groups, and have your application recognize the differences between groups.

Once you have completed loading these images and “Persons” into the service, you are ready to search through this repository by uploading an image. This is done with the following code:

public async Task<Dictionary<Guid, FacePerson>> IdentifyFaces(string filePath, string groupID)
{
    Dictionary<Guid, FacePerson> ret = new Dictionary<Guid, FacePerson>();

    using (Stream s = File.OpenRead(filePath))
    {
        // The list of face attributes to return.
        IList<FaceAttributeType> faceAttributes = new FaceAttributeType[]
        {
            FaceAttributeType.Gender, FaceAttributeType.Age,
            FaceAttributeType.Smile, FaceAttributeType.Emotion,
            FaceAttributeType.Glasses, FaceAttributeType.Hair
        };

        // Detect all faces in the image, returning face ids and the requested attributes.
        var facesTask = await _client.Face.DetectWithStreamWithHttpMessagesAsync(s, true, true, faceAttributes);
        var faceIds = facesTask.Body.Select(face => face.FaceId.Value).ToList();

        // Match the detected faces against the persons registered in the given group.
        var identifyTask = await _client.Face.IdentifyWithHttpMessagesAsync(faceIds, groupID);
        foreach (var identifyResult in identifyTask.Body)
        {
            Console.WriteLine("Result of face: {0}", identifyResult.FaceId);
            if (identifyResult.Candidates.Count > 0)
            {
                // Take the top candidate among all candidates returned.
                var candidateId = identifyResult.Candidates[0].PersonId;
                var person = await _client.PersonGroupPerson.GetWithHttpMessagesAsync(groupID, candidateId);

                var fp = new FacePerson();
                fp.PersonID = person.Body.PersonId;
                fp.Name = person.Body.Name;
                fp.FaceIds = person.Body.PersistedFaceIds.ToList();

                // Copy the detected attributes for this face onto the result object.
                var faceInstance = facesTask.Body.Where(f => f.FaceId.Value == identifyResult.FaceId).SingleOrDefault();
                fp.Age = faceInstance.FaceAttributes.Age.ToString();
                fp.EmotionAnger = faceInstance.FaceAttributes.Emotion.Anger.ToString();
                fp.EmotionContempt = faceInstance.FaceAttributes.Emotion.Contempt.ToString();
                fp.EmotionDisgust = faceInstance.FaceAttributes.Emotion.Disgust.ToString();
                fp.EmotionFear = faceInstance.FaceAttributes.Emotion.Fear.ToString();
                fp.EmotionHappiness = faceInstance.FaceAttributes.Emotion.Happiness.ToString();
                fp.EmotionNeutral = faceInstance.FaceAttributes.Emotion.Neutral.ToString();
                fp.EmotionSadness = faceInstance.FaceAttributes.Emotion.Sadness.ToString();
                fp.EmotionSurprise = faceInstance.FaceAttributes.Emotion.Surprise.ToString();
                fp.Gender = faceInstance.FaceAttributes.Gender.ToString();

                ret.Add(person.Body.PersonId, fp);
            }
        }
    }

    return ret;
}

One key note on the above: the face attributes list identifies the attributes you would like the service to evaluate and return. You can trim this list as you like.

Please feel free to review the sample code, and I hope you find a great use case. For me, a very cool project next on my list is to build a camera with a Raspberry Pi that captures people who come to the door and compares them against a known database of people.

It’s also worth mentioning that this service is fully available in Azure Government for customers that have requirements to be deployed in a sovereign cloud.

How to punch up your resume?


So I thought, given the new direction of this blog, I would focus my attention on some of the questions I get a lot. One of the biggest questions I get asked frequently is, “My resume is terrible, how do I make it better?”

To be perfectly honest, most people undervalue their resume and think of it as some kind of checkbox. I love hearing people say, “I’m not worried; once I go in for the interview, the resume is meaningless.” To which my response is: HOW DO YOU THINK YOU GET THE INTERVIEW?

There’s an old adage that you never get a second chance to make a first impression, and when applying for jobs, the resume is your first impression. When I worked for a prior company, part of my job was interviewing new talent and determining if they were a good fit to move the organization forward. As such, I literally conducted over 100 interviews in an 8-month period. I can say I’ve seen a lot of things, and this blog post is based on the tips that will help get your resume noticed and get you the interview.

  1. DO NOT stick to one page: In college they will tell you that your resume must be limited to one page. That is not realistic for a technical position, because in these positions we are looking for the skills you have. Don’t go crazy, but a good three-page resume is a lot better than Times New Roman, size 8, compressed onto a single page. The human eye needs white space more than anything.
  2. Keep it up to date: This is jumping a little further ahead, but make sure it is 100% current. I’ve read a lot of resumes, and nothing turns an interviewer off more than bringing you in and hearing, “Here’s the stuff that I’ve been working on,” none of which appears on the resume.
  3. Describe the projects: Even better than a list of skills is a project description, while acknowledging that you can’t give away all the details. Something like, “Project XYZ was a mobile app built with Xamarin with a Cosmos DB database back end, and I was the lead developer on the mobile side,” tells me a lot about what your skills are.
  4. Be clear about your role: It helps if you tell me what you did on the project, and be clear about the responsibility, not the title. I’ll give an example: at my first job I was responsible for building software for managing test centers and grading certification exams for the state, but my job title was “LAN Technician”, which wasn’t even close. So rather than trying to dress up your title, just list what you did on the project. It gives a clearer picture of what your skills are.
  5. Put in personal projects: I used to tell people, “I can teach someone to code the way I want, but I can’t teach passion.” So if you’ve contributed to GitHub projects, put it in there; if you have apps in the app store, put them in there. Talk to me about what you built; that shows perseverance and drive, which I can’t teach. If you blog, list it; if you work with user groups, put that in. I once had a candidate show “my son and I built a cloud-enabled race car with a Raspberry Pi and a cell phone”; that’s fantastic information. But make sure you limit it to what you’ve done.
  6. Be honest about how much you’ve worked with something: It’s a great idea to quantify your technical skills; you can use a 1-10 scale or some other measure (on my resume I use a 1-5 scale). This allows them to get a good assessment of your skills and saves everyone time. This is another place where showing “I’m learning Xamarin on my own” is huge. Expect that during the technical interview you are going to be grilled on all of these, and if you aren’t honest, that’s a guaranteed out (next post, we talk about the technical interview).