Leveraging Private Dev Containers

So this should be a pretty quick post, but I thought I would share a tip I found while playing around with the implementation.

Dev Containers really are an amazing advancement in the development tools that are out there. Gone are the days of being handed a massive document and spending a day or two configuring your local development machine.

Dev Containers make it easy to leverage Docker to spin up a container and do development inside it, and then commit that container definition to your repo so the environment can be reproduced.

Now, the problem becomes: what if you have private Python packages or specific internal tools that you want to include in your dev container? What can you do to make these easier for developers to leverage?

The answer is that you can host a container image in a private registry that is exposed to your developers via their Azure subscription. The benefit is that it makes it easy to standardize the dev environment with internal tools, and to spin up new environments without issue.

So the question becomes “How?” And the answer is a pretty basic one. If you follow the spec defined here, you will see that the devcontainer spec for the JSON file includes an initializeCommand option, which allows you to specify a bash script to run during the initialization of the container.

But inside that script, you can add the following commands to make sure your dev container works:

az login --use-device-code
az acr login --name {your registry name}
docker pull {repository/imagename}:latest

And then when you build the Dockerfile, you just point to your private registry. This means that whenever your team starts up their dev container, they will get a login prompt to enter the device code and log into the private Docker registry. And that’s it!
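To tie this together, here is a minimal sketch of what the devcontainer.json might look like (the script path and names here are hypothetical placeholders; devcontainer.json accepts comments):

```jsonc
{
  "name": "internal-dev",
  // Runs on the host before the container is created -- a script
  // containing the az login / az acr login / docker pull commands above.
  "initializeCommand": "bash .devcontainer/acr-login.sh",
  "build": {
    "dockerfile": "Dockerfile"
  }
}
```

The Dockerfile’s FROM line would then reference the image in your private registry, so the pull succeeds because the initialize script has already logged in.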

Having a bias towards delivery

So I’ve been having a lot of conversations lately about Scrum and agile processes, and it got me thinking. What is the difference between an effective agile team and an ineffective one? What is it that makes some projects really excel in an agile world, and others not?

And as I was thinking about this, it got me thinking of the most effective developers I know. I’ve had the privilege of working with some really inspirational developers and engineers in my career and the more I thought about it, the more I realized there was a common thread between them all.

They all had a bias towards delivery.

So what exactly does that mean? And how does that lead to success in this field? Let’s start with the term bias: according to Merriam-Webster, a bias is “an inclination of temperament or outlook.” So the bias is a focus on what we are inclined to do.

So when I’m talking about a bias towards delivery, I am talking about focusing our efforts with an inclination towards what the delivery will be, or what the outcome of the effort will be.

Isn’t this just Agile?

One of the most common problems in Agile development is that many team members or development teams will approach Scrum as just “taking a checkpoint every two weeks.” Or consider how often we have the discussion around the idea of a “definition of done.” All of this is built around how we break up work, and that’s all valuable, but all too often I still see it fail because it fails to prioritize shipping software over going through the motions.

So as much as those activities are valid, I do feel like all too often the delivery element is lost in the discussion. If you look at the Agile Manifesto, the creators of this philosophy saw this all too clearly. The very first principle of agile is:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Principles behind the Agile Manifesto


So this is all great discussion, but let me get tactical about how this all too often becomes a problem for software engineering teams. How many times have you seen user stories that have no acceptance criteria, or worse, just say “meets definition of done”? This does not define what the delivery at the end is. It just repeats some boilerplate cut-line that’s supposed to be a guide.

Let’s take a couple of concrete examples, first you have the following as a User Story:

  • Title: As a user, I should be able to edit my profile and make changes to my contact information.
  • Description: The user should be able to update contact information in their profile including email address, phone number, or twitter handle.
  • Acceptance Criteria: The solution will include all required changes, and appropriate unit tests, and conform with our definition of done.

Now to some people that looks perfectly fine and reasonable, but I would argue that it is very ambiguous and doesn’t have a focus on delivery. My biggest question is, “What is the expected delivery at the end of the sprint?” For example, I would ask the following questions:

  • Is the expectation that I just have a PR Submitted?
  • Does it need to be merged?
  • What should a demo of this to the product owner look like?

It is not immediately clear from this work item what delivery of this solution looks like. Now this one isn’t terrible, and we can probably derive a lot of the information from standard practices in an established team, but the next example is much worse. The most common culprit I’ve found is the infamous “Research Spike”.

You’ve probably seen stories like this before:

  • Title: Spike – Research options of using PostGres vs Cosmos for Geospatial queries.
  • Description: Evaluate and compare the benefits of using PostGres as a GIS data store vs Cosmos DB for Geospatial queries.
  • Acceptance Criteria: Complete evaluation of technologies to help empower architectural decision.

Now this is really not a good use of effort; it provides almost zero guard rails for the developer and no support for timeboxing the evaluation. There is no real way to discern what specifically the evaluation entails, and it leads to things like “Moving this evaluation forward as we could use more time.”

A great way to change this, I would argue, is to focus on what the delivery in 2 weeks will look like, and also to get very specific about what the focus of the spike will be. So we could change the above to something like this:

  • Title: Spike – Research options of using PostGres vs Cosmos for Geospatial queries.
  • Description: Evaluate and compare the benefits of using PostGres as a GIS data store vs Cosmos DB for Geospatial queries. Evaluation will provide the following key answers:
    • What are the potential performance gains to query execution times?
    • What new features will be available after changing platforms?
    • What potential feature loss could occur with the move?
    • What are the cost implications of the switch?
  • Acceptance Criteria: Delivery will include a report focusing on the questions above and a Proof of Concept showing the relevant details from the report during our sprint demos.

Looking at the above, it becomes very clear what the expectations of the spike are, and what the delivery at the end of the two weeks looks like. It also means that the odds of this item “moving to next sprint” are very low. There is a possibility of this leading to additional research spikes, but it focuses the effort on what should be delivered.

How can I start adopting this approach?

At the most basic level, a common practice of many of the great engineers I’ve had the privilege to work with is to start the sprint by looking at each work item and asking, “What am I going to deliver at the end of the sprint for this?” They decide what that is, and work backwards to what they need to do to get there. I would recommend starting there and aligning your mindset towards what you are ultimately going to deliver. This will cause you to ask more questions of the work items you’re assigned and push you towards the more tangible delivery items that you should be focused on.

Building a magic mirror – Part 1 – The project

For this post, I thought I would share a project I’ve been working on for my family. We have a marker board in the kitchen that helps keep track of everything going on for myself, my wife, and our kids. And while this is great in practice and does help, the fact that it is analog has been driving me nuts for YEARS. So I wanted to see if we could upgrade it with a magic mirror.

Now I have a magic mirror in my office that I use to help stay focused, and I’ve shared here how I manage it via Azure DevOps. But I’ve never really done a post detailing how I built this mirror for those interested.

First Hardware Requirements:

In my case I’m using an old Raspberry Pi 3 that I happened to have sitting around the office, and I’ve installed Raspberry Pi OS on that device.

Outside of that, I’ve got the basics:

  • Power supply cable
  • HDMI cable
  • Monitor
  • 64 GB Micro SD card
  • SD Card Reader

Now I have plans to hook this to a larger TV when I set it up in the kitchen, but for right now I’ve just got a standard monitor.

Goal of the Project

For me, I found this video on YouTube, and thought it was pretty great, so this is my starting point for this project.

Setting up the Raspberry PI – Out-of-the-Box

To download and install the OS, I used the Raspberry Pi image manager found here. I used the SD card reader I had in the office to format the SD card, and then install the OS.

Once that was completed, I booted up my Raspberry Pi and finished the setup, which involved the following (there is a wizard to help with this part):

  • Configure localization
  • Configure Wifi
  • Reset password
  • Reset Hostname
  • Download / Install Updates

Finally, one step I took to make my life easier was to enable SSH on the Raspberry Pi, which allows me to work on it from my laptop rather than permanently setting up a keyboard / monitor / mouse.

You do this by going to “Settings” on the Raspberry Pi, then the “Interface” tab, and selecting “Enable” for “SSH”.
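Alternatively, if you would rather skip the GUI, raspi-config on Raspberry Pi OS can be driven non-interactively (this assumes raspi-config is present, which it is on standard images):

```shell
# Enable the SSH server (0 means "enable" in raspi-config's non-interactive mode)
sudo raspi-config nonint do_ssh 0
```

After that, you can SSH to the Pi from your laptop using its hostname or IP address.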

Now that my Raspberry Pi is running, we come to the meat of this, and that’s getting the magic mirror running:

Step 1 – Install Node.js

You need Node.js to run everything about the magic mirror, so you can start by running these commands on your Raspberry Pi:

curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install -y nodejs

From there, I cloned the magic mirror repo to Raspberry Pi.

git clone https://github.com/MichMich/MagicMirror

Then enter the repo from the command prompt:

cd MagicMirror/

Then you need to install the npm dependencies to be able to work with the MagicMirror installation. This takes the longest, and the first time I ran it I actually had to use sudo to make sure the install completed.

sudo npm install

A good recommendation from the magic mirror site is to copy the default config so you have a backup. You can do that with this command:

cp config/config.js.sample config/config.js

Finally you can start your MagicMirror with:

npm run start

Now, the next part was tricky: if you reboot your Raspberry Pi, the magic mirror will not start automatically, and you need to do some more configuration to make that happen. Most documentation will tell you to use pm2, and I would agree with that, but if you try to run the commands on the most recent Raspberry Pi OS, you’ll find that pm2 is not installed. You can resolve that with this command:

npm install pm2 -g

Then run the following command to configure your MagicMirror to run on startup:

pm2 startup

After running this command, you will be given another command that enables pm2 on startup; run that command.

Then run the following:

cd ~
nano magicmirror.sh

Put the following in the magicmirror.sh file, and then hit Ctrl-X, then Y, to save and exit.

cd ~/MagicMirror
DISPLAY=:0 npm start

Finally run these commands to finish configuration

chmod +x magicmirror.sh
pm2 start magicmirror.sh

pm2 save
sudo reboot

After that you’ll see your monitor displaying the default configuration for the MagicMirror.

Next post I’ll walk you through the steps I took for configuring some of the common modules to get it working.

Reconciling the Temporal Nature of Software Development

Stop me if you’ve heard this one before: you are working on a code project, and you really want to make this one perfect. You spend a ton of time brainstorming ideas to make this code “future proof”; this code is going to be elegant and perfect, the code that will last forever.

Or here’s another one: you open up a project you worked on a long time ago, and when you look at the code it is just awful. I once saw a quote in a code comment that said, “When I wrote this, only God and I knew how it worked; now only God knows.”

Now both of these statements are extremely common among a lot of engineers I know and have worked with, and I’ve fallen into these traps myself (a lot), way more than I care to admit.

But over the past year, I’ve come to the conclusion that these types of behaviors are fool’s errands.

I can’t tell you the number of times I’ve seen this confirmed, and these types of scenarios never lead to a positive outcome. And if we’re being honest, the past decade of adopting agile processes has, in many ways, addressed these same problems. Waterfall as a development methodology is built on this falsehood, so why do we continue to chase this “Holy Grail” that always turns out so poorly?

Realistically, I have found that these situations lead to either:

  • Setting unrealistic expectations and forcing additional stress on yourself to deliver.
  • Making it really easy to fall into imposter syndrome.

Now I’m not writing this claiming to be some kind of psychology expert; all I want to do here is share my own thoughts and feelings on this. Honestly, your mileage may vary, but this is based on my experience.

The simple fact I’ve come to realize is that Software Development is a temporal activity, like any act of creation. At the end of the day, all you are capable of doing is creating the best code that you can at the present moment. Period.

Any act of creation, whether it’s writing, art, architecture, etc., has one thing in common: once you go through the process, it is frozen in time. Let’s face it, the Mona Lisa isn’t going to change, and any work being done on it is strictly to maintain it.

When you boil it down, at the project level, Agile focuses on this through the concept of “Definition of Done”, or a “just ship it” mentality.

But I would argue that this mindset needs to extend much further to help prevent ourselves from inflicting burnout upon ourselves. Carol Dweck talks about this in her growth mindset to a certain extent, specifically questioning the idea that talent is fixed, and pointing out that we as humans have the ability to grow in our ability to do the things we care about.

Let me give you an example: there are whole college courses on the differences in Van Gogh’s work over the course of his career. The simple fact is that every day we get better at our craft. So ultimately, it’s important to embrace that coding is no different.

My point in this post is this…it’s not worth putting the stress on yourself to build something that’s going to “stand the test of time.” Remember that at the end of the day, the best you can do is the intersection of these constraints:

  • Resources – The tools you have at your disposal.
  • Skill – Your ability to do the thing you are trying to do.
  • Knowledge – Your knowledge of the problem being addressed, or the environment your work will live in.
  • Time – How much time you have to create the thing.
  • Focus – The number of distractions getting in your way.
  • Desire – How much your heart is in the work.

These 6 pillars are ultimately the governing constraints that will determine the level of work you’re going to do.

I have been writing and re-writing this post for a while, but as we approach the end of 2021, I’m feeling reflective. My point is this: when you do your work or build your solution, you need to embrace the idea that you are not going to build the digital equivalent of the Great Wall of China, and your work is not going to stand the test of time. There will be new technologies, new techniques, and new learnings you will bring back to it. So don’t put yourself through that pain; rather, do the best job you can within those 6 pillars, and move on to the next thing.

When you start to consider this, if you’re like me, you will realize that you are free to do the best you can, and not put that additional pressure on yourself.

The joke I tell people is this:

  • Past Kevin is an idiot who writes terrible code and makes bad choices. And he likes to procrastinate.
  • Future Kevin is a genius who can solve any problem in an elegant and amazing manner.
  • Present Kevin is stuck in the middle, doing the best he can, and trying to figure out how to get it done.

I wish you all the best in the coming year, and hope you have a great holiday season.

Cool Nerdy Gift Idea – Word Cloud

The holidays are fast approaching, and this year I had a really cool idea for a gift that turned out well, so I thought I would share it. For the past year and a half, I’ve had this thing going with my wife where every day I send her a “Reason X that I love you…”, and it’s been going for a long time (up to 462 at the time of this post).

But what was really cool was this year for our anniversary I decided to take a nerdy approach to making something very sentimental but easy to make. Needless to say, it was very well-received, and I thought I would share.

What I did was use Microsoft Cognitive Services and Power BI to build a word cloud based on the key words extracted from the text messages I’ve sent her. Microsoft provides a cognitive service that does text analytics, and if you’re like me, you’ve seen sentiment analysis and other bots before. But one of its capabilities is key phrase extraction, which is discussed here.

So given this, I wrote a simple python script to pull in all the text messages that I exported to csv, and run them through cognitive services.

from collections import Counter

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

key = "..."
endpoint = "..."

run_text_analytics = True
run_summarize = True

class KeywordResult():
    def __init__(self, keyword, count):
        self.keyword = keyword
        self.count = count

# Authenticate the client using your key and endpoint
def authenticate_client():
    ta_credential = AzureKeyCredential(key)
    text_analytics_client = TextAnalyticsClient(
        endpoint=endpoint, credential=ta_credential)
    return text_analytics_client

client = authenticate_client()

def key_phrase_extraction(client):
    try:
        if run_text_analytics:
            print("Running Text Analytics")
            with open("./data/reasons.txt") as f:
                lines = f.readlines()

                responses = []
                # The service accepts up to 10 documents per request,
                # so submit the messages in batches of 10
                for i in range(0, len(lines), 10):
                    documents = lines[i:i+10]
                    for response in client.extract_key_phrases(documents=documents):
                        if not response.is_error:
                            for phrase in response.key_phrases:
                                responses.append(phrase)
                        else:
                            print(response.id, response.error)

                with open("./data/output.txt", "w") as o:
                    for response_line in responses:
                        o.write(response_line + "\n")
            print("Running Text Analytics - Complete")
        if run_summarize:
            print("Running Summary Statistics")
            print("Getting output values")
            with open("./data/output.txt") as reason_keywords:
                keywords = reason_keywords.readlines()
                keyword_counts = Counter(keywords)
                print("Counts retrieved")

                print("Building Keyword objects")
                keyword_list = []
                for key_text, value in keyword_counts.items():
                    keyword_list.append(KeywordResult(key_text, value))
                print("Keyword objects built")

                print("Writing output files")
                with open("./data/keyword_counts.csv", "w") as keyword_count_output:
                    for k in keyword_list:
                        key_value = k.keyword.replace("\n", "")
                        keyword_count_output.write(f"{key_value},{k.count}\n")
                print("Finished writing output files")
    except Exception as err:
        print("Encountered exception. {}".format(err))

key_phrase_extraction(client)

Now with the above code, you will need to create a Text Analytics cognitive service, and then populate the endpoint and key provided. The code will take each row of the document, run it through cognitive services (in batches of 10), and then output the results.

From there, you can open up Power BI, point it at the output file, add the Word Cloud visual, and you’re done. There are great instructions found here if it helps.

It’s a pretty easy gift that can be really amazing. And Happy Holidays!

Errors and Punishment – The Art of Debugging

So recently I have been blessed enough to talk to several people who are new to the software development field, and to do some mentoring. And frankly, I’m the lucky one here, as there are few things better than meeting with people who are new to this industry and getting to engage with their ideas. If it isn’t something you do regularly, you should start.

But one of the things that has become very apparent to me is just how little time is spent actually teaching how to debug. I saw this when I was teaching: there’s a tendency by many in academia to show students how to code, and when they run into errors, to show them how to fix them. Which at its core sounds like “Yes, Kevin, that’s what teachers do…”, but I would argue it is a fundamentally flawed approach. Error messages and fixing things that are broken are a pretty large part of being a developer, and by giving junior developers the answer, we are doing the proverbial “giving them a fish, rather than teaching them to fish.”

To that end, I wanted to at least start the conversation on a mindset for debugging, and how to figure out what to do when you encounter an error. Now obviously I can’t cover everything, but I want to give some key tips on how to approach debugging when you have an error message.

Honestly, debugging is a lot like a police procedural, and it’s a good way to remember the steps, so hang with me through the metaphor.

Tip #1 – Start at the Scene of the Crime – The Error Message

Let’s be honest: I know this sounds basic, but you would be surprised how often even senior devs make this mistake. Take the time to stop and really read the error message, and what I mean by that is do the following:

  • What does the error message tell you?
  • Can you find where the error is occurring?
  • Is there a StackTrace?
  • What component or microservice is throwing the error?
  • What is the error type?

Looking at an error message is not just reading the words of the error; there are usually other clues that can help you solve the mystery. Things such as the exception type, or a stack trace where you can find the exact line of code, are going to be critical.

Honestly, most people just read the words and then start making assumptions about where an error occurred. And this can be dangerous right out of the gate.

Tip #2 – Look for Witnesses – Digging through logs

Now, in my experience, an error message is only one piece of the puzzle / mystery; the next step is to look for more information. If you think about a police procedural on TV, they start at the crime scene, but what do they do next? Talk to witnesses!

Now, in terms of debugging we have the added benefit of being able to refer to logs. Most applications have some form of logging, even if it’s just outputting messages to a console window, and that information can be very valuable in determining an error message’s meaning.

Start looking for logs that were captured around the same time, specifically looking for:

  • What was occurring right before the error?
  • What data was being moved through the solution?
  • What was the request volume that the system was handling?
  • Were there any other errors around the same time?

Any information you can find in the logs is critical to identifying and fixing the issue.

Tip #3 – Deal only in facts

Now this next one is absolutely critical, and all too commonly overlooked. Many developers will start making assumptions at this point, immediately announcing “I know what it is,” and start changing things. Resist this urge, no matter what.

Now, I’m not going to lie, some errors are easy, and with a little bit of searching it becomes really easy to see the cause and address it; if you are 100% sure, that should be the case. But from the TV-procedural perspective, this is the difference between the rookie and the veteran. If you are new to this field, resist the urge to jump to an answer and only deal in facts.

What I mean by this is: do not let jumping to conclusions cloud the story you are building of what occurred and why.

Tip #4 – Keep a running log of findings and things you tried

This is something I started doing, and it pays dividends. Just like the cops in a police procedural make a case file as soon as they capture their original findings, you should too. Keep a running document, either in Word or, in my case, OneNote. I copy all the findings into that document:

  • Error Messages
  • Relevant Logs
  • Configuration Information
  • Dates / times of the errors occurring
  • Links to documentation

Anything I find, I keep appending to the document as new information comes in.

Tip #5 – Look for changes

The other key piece of evidence most people overlook is the obvious question of “What changed?” Code is static and does not degrade over time. If it was working before and isn’t anymore, something changed. Look for what might have changed in the solution:

  • Was code updated?
  • Were packages or libraries updated?
  • Was a dependency updated?
  • Was there a hardware change?

All of this is valuable evidence to help you find the root cause.

Tip #6 – Check documentation

A good next step is to check any documentation, by which I mean any reference material that could explain how the code is supposed to work. This can include the following:

  • Documentation on libraries and packages
  • ReadMe / GitHub issues / System Docs
  • Code Comments

Any of these can help you better understand how the code is supposed to work and identify the way it is actually supposed to behave.

Tip #7 – Trust Nothing – Especially your own code

At this stage, again, people like to make assumptions, and I can’t tell you the number of times I have done this personally: you stare at code and say it doesn’t make sense. “I know X, Y, and Z are correct, so why is it failing?” Only to find out one of your assumptions about X, Y, or Z was false. You need to throw all assumptions out the window and, if necessary, go and manually verify everything you can. This will help you identify the underlying problem in the end.

Also at this stage I see another common mistake: letting your ego into the debugging. Many developers look at the code they’ve built and trust it because they built it. But this bias is usually the most damaging to your investigation.

Similar to the running joke of “The husband always did it…” I recommend adopting the philosophy of “Guilty until proven innocent” when it comes to any code you write. Assume that something in your code is broken, and until you can prove it, don’t start looking elsewhere. This will help in the long run.

Let me give an example: let’s say I am building code that hits an API. I write my code, it looks good to me, I run it, and I get back a 404 error saying not found. I’ve all too often seen devs then ping the API team to see if their service is down, or networking to see if something is blocking the traffic, all before even checking “Did I get the endpoint right?”
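As a concrete sketch (the URLs here are hypothetical), even just checking how your request URL is being built can catch a surprising number of 404s before you blame anyone else:

```python
from urllib.parse import urljoin

# A classic source of 404s: urljoin drops path segments when the base
# lacks a trailing slash or the relative path starts with "/".
print(urljoin("https://api.example.com/api", "/v1/users/42"))
# -> https://api.example.com/v1/users/42  (the /api segment is gone)

print(urljoin("https://api.example.com/api/", "v1/users/42"))
# -> https://api.example.com/api/v1/users/42
```

Two minutes spent printing the final URL your code actually requests is far cheaper than a cross-team escalation.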

Doing this makes you look foolish and wastes people’s time. It’s better to verify that your code is working properly first, which then empowers you to have that conversation with networking:

You: “I think it’s a networking issue.”

Network Engineer: “Why do you think that?”

You: “I’ve done the following to rule out anything else…so I think it could be ________________”

Tip #8 – Try to reproduce in isolation / Don’t Make it a hatchet job!

If you get stuck at this point, a good trick is to try to reproduce the error in isolation. Especially when you are looking at a microservice architecture, there can be a lot of moving parts, and it can be helpful to recreate the error away from the existing code base by isolating components. This makes it easier to gather evidence, and not unlike a police procedural where they try to reproduce the events of a theory, it can be a great way to isolate a problem.

The one thing to try really hard to avoid is taking a hatchet to the code. All too many times I’ve seen people fall into this pattern to solve a problem:

  • I’m going to try this…
  • Run Code
  • Still Broken…
  • Change this…
  • Run Code
  • Still Broken…

You are actually making your life harder by not being methodical. Now, I’m not saying don’t try things, but be more deliberate, and make sure you take time to record your thoughts and attempts in your running log. This is critical to keeping things logical and methodical and not spinning your wheels.

Tip #9 – When you find the answer, write it down

When you finally find the answer, there is a tendency to celebrate, push that commit, cut that PR, and be done. But really, you’re not doing yourself any favors if you stop there. I find it helpful to make sure you take the time to answer the following:

  • Do I fully understand why this occurred?
  • Can I document and explain this?
  • Am I convinced this is the best fix for this problem?

Really you want to make sure you have a full understanding and complete your running log by documenting the findings so that you can refer to them in the future.

Tip #10 – Make it easier and test in the future

The other thing that is largely overlooked, and skipped due to the “fix celebration,” is the debrief on the issue. All too often we stop and assume that we are done because we made the fix. But really we should be looking at the following:

  • Is there an automated way I can test for this bug?
  • How will I monitor to make sure my fix worked?
  • Does this hot fix require further work down the line?
  • Does this fix introduce any technical debt?
  • What can I do to make this type of error easier to debug in the future?
  • What parts of the debug and testing cycle made it hard to identify this error?
  • What could I have done differently to make this go faster?
  • What does this experience teach me?

These kinds of questions are critical to ongoing success in your software development career and the health of your project longer term.

I hope you found these 10 tips helpful!

How to leverage private modules in your YAML Pipelines

How to leverage a private modules in your YAML Pipelines

I’ve made no secret about my love of DevOps, and to be honest, over the past few months it’s been more apparent to me than ever before that these practices are what make developers more productive. Taking the time to set up these processes correctly is extremely valuable and will pay significant dividends over the life of the project.

Now that being said, I’ve also been doing a lot of work with Python, and honestly I’m really enjoying it. It’s one of those languages that is fairly easy to pick up, but the options and opportunities based on its flexibility take longer to master. One of the things I’m thankful we started doing was leveraging Python modules to empower our code re-use.

The ability to leverage pip to install modules into containers creates this amazing ability to separate the business logic from the compute implementation.

To that end, there’s a pretty common problem that I’m surprised is not better documented: if you’ve built Python modules and deployed them to a private artifact feed, how can you pull those same modules into a Docker container to be used?

Step 1 – Create a Personal Access Token

The first part of this is creating a personal access token (PAT) in ADO, which you can find instructions for here. The key, though, is that the PAT must have access to the Packaging scope, and I recommend read access only.

Step 2 – Update the Dockerfile to accept an argument

Next, we need to update our Dockerfile to accept an argument so that we can pass in the feed URL. You’ll need to build the URL you’re going to use in the following format:

https://{PAT}@pkgs.dev.azure.com/{organization}/{project id}/_packaging/{feed name}/pypi/simple/
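To avoid typos when assembling that URL, a small helper can build it from its parts. This is just an illustrative sketch: the function name and sample values are made up, and a real PAT should come from an environment variable or secret store, never from source control.

```python
import os

def build_feed_url(pat, organization, project, feed):
    """Assemble the pip index URL for a private Azure Artifacts feed."""
    return (
        f"https://{pat}@pkgs.dev.azure.com/"
        f"{organization}/{project}/_packaging/{feed}/pypi/simple/"
    )

# ADO_PAT is a hypothetical variable name; the fallback is for demo only.
pat = os.environ.get("ADO_PAT", "fake-pat-for-demo")
print(build_feed_url(pat, "myorg", "myproject", "myfeed"))
```

The resulting string is what gets passed into the Docker build in the next step.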

This is done by adding the following to your Dockerfile:

ARG feed_url=""
RUN pip install --upgrade pip
RUN pip install -r requirements.txt --index-url="${feed_url}"

Step 3 – Update the YAML file to pass the argument

The above provides the ability to pass the URL required for accessing the private feed into the Docker image build. That value can be supplied in the YAML pipeline by using the following:

- task: Bash@3
  displayName: "Build Docker Image"
  inputs:
    targetType: 'inline'
    script: 'docker build -t="$(container_registry_name)/$(image_name):latest" -f="./DockerFile" . --build-arg feed_url="$(feed_url)"'
    workingDirectory: '$(Agent.BuildDirectory)'

At this point, you can create your requirements file with all the appropriate packages, and the container image will build when you run your automated build.

How to leverage templates in YAML Pipelines

How to leverage templates in YAML Pipelines

So it's no secret that I really am a big fan of leveraging DevOps to extend your productivity. I’ve had the privilege of working on smaller teams that have accomplished far more than anyone could have predicted. And honestly, the key principle that is always at the center of those efforts is to treat everything as a repeatable activity.

Now, if you look at the idea of a micro-service application, at its core it's several different services that are independently deployable, and that statement can cause a lot of excessive technical debt from a DevOps perspective.

For example, if I encapsulate all logic into separate python modules, I need a pipeline for each module, and those pipelines look almost identical.

Or if I’m deploying docker containers, my pipelines for each service likely look almost identical. See the pattern here?

Now imagine you do this and build a robust application with 20 to 30 services running in containers. In the above, that means if I have to change the deployment pipeline, by adding say a new environment, I have to make the same changes to 20 to 30 pipelines.

Thankfully, ADO has an answer to this, in the use of templates. The idea here is we create a repo within ADO for our deployment templates, which contain the majority of the logic to deploy our services and then call those templates in each service.

For this example, I’ve built a template that I use to deploy a docker container and push it to a container registry, which is a pretty common practice.

The logic to implement it is fairly simple. First, register the template repo as a resource in your pipeline:

resources:
  repositories:
    - repository: templates
      type: git
      name: "TestProject/templates"

Using the above code will enable your pipeline to pull from a separate git repo. Then, in the template repo, you can create a sample template that declares its parameters:

parameters:
  - name: imageName
    type: string
  - name: containerRegistryName
    type: string
  - name: repositoryName
    type: string
  - name: containerRegistryConnection
    type: string
  - name: tag
    type: string

steps:
  - task: Bash@3
    displayName: "Building docker container"
    inputs:
      targetType: 'inline'
      script: 'docker build -t="${{ parameters.containerRegistryName }}/${{ parameters.imageName }}:${{ parameters.tag }}" -t="${{ parameters.containerRegistryName }}/${{ parameters.imageName }}:latest" -f="./Dockerfile" .'
      workingDirectory: '$(Agent.BuildDirectory)/container'

  - task: Docker@2
    displayName: "Pushing container to registry"
    inputs:
      containerRegistry: '${{ parameters.containerRegistryConnection }}'
      repository: '${{ parameters.imageName }}'
      command: 'push'
      tags: |
        ${{ parameters.tag }}
        latest

Finally, you can go to any YAML pipeline in your project and use the following to reference the template:

- template: /containers/container.yml@templates
  parameters:
    imageName: $(imageName)
    containerRegistryName: $(containerRegistry)
    repositoryName: $(repositoryName)
    containerRegistryConnection: 'AG-ASCII-GSMP-boxaimarketopsacr'
    tag: $(tag)

Poly-Repo vs Mono-Repo

Poly-Repo vs Mono-Repo

So I’ve been doing a lot of DevOps work recently, and one of the bigger topics of discussion I’ve been a part of lately is this idea of Mono-Repo vs Poly-Repo. And I thought I would weigh in with some of my thoughts on this.

So first and foremost, let’s talk about what the difference is. Mono-Repo vs Poly-Repo actually refers to how you organize your source control. Now I don’t care if you are using Azure DevOps, GitHub, BitBucket, or any other solution. The idea here is whether you put the entirety of your source code in a single repository, or split it up into multiple repositories.

Now this might not sound like a big deal, or might not make sense depending on the type of code you develop, but it also ties into the idea of microservices. If you think about microservices, and the nature of them, then the debate about repos becomes apparent.

This can be a hotly debated statement, but most modern application development involves distributed solutions and architectures built on microservices. Whether you are deploying to a serverless environment or to Kubernetes, most modern applications involve a lot of completely separate microservices that together provide the total functionality.

And this is where the debate comes into play: say your application is actually made up of a series of smaller microservice containers that together compose the overall functionality. How do you store them in source control? Does each service get its own repository, or do you have one repository with all your services in folders?

When we look at Mono-Repo, it’s not without benefits:

  • Easier to interact with
  • Easier to handle changes that cut across multiple services
  • Pull Requests are all localized

But it isn’t without its downsides:

  • Harder to control from a security perspective
  • Makes it easier to inject bad practices
  • Can make versioning much more difficult

And really, in a lot of ways, the pros and cons of Poly-Repo read like the opposite of what’s above.

For me, I prefer poly-repo, and I’ll tell you why. Ultimately it can create some more overhead, but I find it leads to better isolation and enforcement of good development practices. By making each repo responsible for containing a single service and all of its components, you get a much cleaner development experience, and it is much easier to maintain that isolation and avoid letting bad practices slip in.

Now I do believe in making repos for single purposes, and that includes things like a templates repo for deployment components and GitOps pieces. But I like the idea that to make a change to a service, the workflow is:

  • Clone the Services Repo
  • Make changes
  • Test changes
  • PR changes
  • PR kicks off automated deployment

This model helps keep each of these services independently deployable, which is ultimately where you want to be, as opposed to building multiple services at once.

How to start a New Job in a Pandemic

How to start a New Job in a Pandemic

So if you find me on LinkedIn, you know that I recently changed jobs at my current company. Changing roles is something that many of us have had to deal with several times in our careers. Even if you’ve been able to work at the same place for most of your career, it’s more than likely that you’ve had to change roles, or even teams, at one point.

Now this time was a little different, mainly because it was in a pandemic. In previous roles, I would say I was a “mobile worker” and then a mix of “remote and mobile” (probably a blog post on the differences coming). But in all cases, I would go to an office for the initial transition, and at least meet people face to face. But thanks to COVID-19, that became impossible.

So this was the first time I’ve had to transition jobs in a completely remote position. And more than that, I’ve noticed I’m not alone, having friends who are going through the exact same thing. Through this experience, there were some things I was able to do to help myself, and other things I learned for next time. In this post I hope to capture both. So here are some tips and tricks for starting a new job in a 100% remote capacity.

Tip #1 – Make sure you fully transitioned

This is the first thing I will point out, and it may not apply to everyone. If you are changing roles at your current company, though, this can definitely be important. As you gear up to transition from one role to another, it’s especially important that you close out your current responsibilities to prepare for your new role’s responsibilities. This sounds like common sense, but I have seen in the past how this can go horribly wrong.

Because you are a remote worker, it’s easy for some managers or teammates to think that because you aren’t physically moving, or are still working for the same organization, you can be available after your start date at your new position. This is a slippery slope in my experience that should be avoided. The simple fact is this: it’s not fair to you, your previous team, or your new team for your attention to be divided. Now I’m not saying you should be a jerk about this, or that it’s a scorched-earth policy when you leave your old team. But I am saying be cautious about taking on responsibilities after you start your new position.

It’s one thing for someone on your old team to reach out with questions or to ask you for context on something. It’s another entirely for you to be working on key deliverables or attending meetings for your old team after your start date at the new position. Remember to be respectful of all sides, and I will say that for this to work, you need to respect this tip as well.

More than likely you gave a certain amount of notice, and I’ve seen way too many people take those two weeks as "I’m just going to coast out and then pick up my new position." This is absolutely the wrong behavior. You have a responsibility to your old team to do as much as you can in those two weeks to set them up for success in the transition. For example, take the following actions:

  • Document everything you can about your current projects, things like:
    • The state they are in
    • Why certain decisions were made
    • Key team members who were part of the conversations
    • Where to find certain things
    • Key concepts and processes
  • Setup meetings with your backfills early, and make sure they have time to consume everything and understand what is going on.
  • Communicate with team members and others as much as you can to make sure nothing gets dropped.
  • Document any processes you do to help them pick them up.

Tip #2 – Make sure you have a meeting with your new manager day 1

This one feels kind of obvious, but again, some people miss it. You really need to make sure you have a meeting with your new manager as soon as possible. This is going to be to set the foundation of your relationship with them.

You need to make sure you have this meeting as this is the time to make sure you focus on the following key areas:

  • Expectations on communication
  • Expectations on delivery and focus
  • Key objectives and things you can help with
  • How to engage with the team
  • What is their preferred means of communication?
  • What are their working hours?
  • What is a reasonable response time?
  • What kind of regular cadence should you set up? And who defines the agenda?

Tip #3 – Make sure you talk about team culture

The above meeting with your manager is a great time to have this conversation. This is one of those things everyone overlooks, but it is probably the most important. In a pre-COVID world, you would discover the nature of the team naturally. You would talk to your teammates and find out how they work. And let’s be honest, some of these things could be taken for granted. But with the advent of remote work, all of the old assumptions are out the window. So you need to take the time to figure out what the culture of the team is, and by that I mean things like this:

  • Are there regular cadence calls?
  • Do people prefer phone or face to face?
  • Is there a preference for camera use?

Tip #4 – Ask for an “onboarding buddy”

This is extremely important; onboarding in a vacuum is completely impossible. Even the best manager in the world isn’t going to be able to talk about everything. And to be honest, even if they want to, team dynamics are always going to be different.

Let’s face it, people behave differently for their managers than they do for their other team members. Plus, most managers I’ve known are very busy people, and the demands of onboarding someone can be problematic.

So a good solution is to ask your manager for an "onboarding buddy": someone you can check in with on a daily basis to make sure things are going well and that you are aligning properly.

Tip #5 – Make sure you have explicit direction

I find too often, and I’m including myself in this, that I am afraid to step back and say, "I’m not entirely clear on what you are looking for." You don’t want to find yourself in a situation where you don’t have an understanding of your direction and next steps. Make sure you get explicit instructions from your new manager on what your priorities should be and what you are going to be working on.

Tip #6 – Make sure you are able to contribute early

Look for ways to dive in. I hate to say this, but most onboarding is fairly generic, and the best way to get to know the team is to roll up your sleeves and get to work. Find ways you can help with what they are working on right now.

Don’t be pushy about it but just ask “How can I help?” Or look for things you feel comfortable taking on early. The best way to get to know a team is by getting in a foxhole with them and showing value.

Tip #7 – Start in listening mode

One of the biggest mistakes I see over and over again is people showing up to a team and starting with "we should do this differently." It is pretty presumptuous to walk into a team that has been working together and open with "You’re doing it wrong." It also causes you to miss out on a lot of learning, so I would recommend you take the time to really understand how the team works, and the "why," before you start throwing out ideas for changes.

So those are some of my initial thoughts on how to join a team remotely.