When running services locally in Docker that need to authenticate with Azure Service Bus, you can leverage the Azure CLI (az login) to provide credentials. This avoids manually setting up connection strings or managed identities for local development.

Step 1: Authenticate Using Azure CLI

Run the following command to authenticate and persist credentials inside a volume:

docker run -it --rm -v "$PWD/azure-auth:/root/.azure" mcr.microsoft.com/azure-cli az login

This will:

  • Prompt you to log in to Azure.
  • Store authentication tokens in a directory called azure-auth inside your project.
  • Remove the container after login (--rm ensures it doesn’t persist).

Step 2: Mount the Credentials in Your Service Container

Modify your docker-compose.yml to mount this volume in the service that needs Azure authentication:

services:
  my-service:
    image: my-service-image
    volumes:
      - "./azure-auth:/root/.azure"

This ensures that when your service runs, it has access to the Azure credentials stored in azure-auth.

Step 3: Verify Authentication

Inside the running container, you can verify that authentication works by running:

az account show

If configured correctly, this should display your logged-in Azure account details.
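For example, assuming your compose service is called my-service and its image has the Azure CLI installed, you can run the check from the host:

docker compose exec my-service az account show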

This setup allows your Docker container to seamlessly authenticate with Azure Service Bus while keeping credentials isolated and manageable.

Step 4: Use in C# to Connect to Azure Service Bus

Once authentication is set up, you can use the Azure SDK for .NET to connect to Azure Service Bus in your C# application. Ensure your application runs inside the container with the mounted credentials.

Install the required NuGet packages if you haven't already - Azure.Messaging.ServiceBus for the client, and Azure.Identity for DefaultAzureCredential:
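dotnet add package Azure.Messaging.ServiceBus
dotnet add package Azure.Identity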

Then, in your C# application, use DefaultAzureCredential to authenticate and connect to Service Bus. Here is a minimal sketch - the namespace and queue names are placeholders:
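using Azure.Identity;
using Azure.Messaging.ServiceBus;

// DefaultAzureCredential tries several sources, including the Azure CLI
// login persisted in the mounted /root/.azure directory.
var credential = new DefaultAzureCredential();

// "my-namespace" and "my-queue" are placeholders - use your own names.
await using var client = new ServiceBusClient(
    "my-namespace.servicebus.windows.net", credential);

ServiceBusSender sender = client.CreateSender("my-queue");
await sender.SendMessageAsync(new ServiceBusMessage("Hello from Docker!"));

One caveat: DefaultAzureCredential obtains the CLI credential by shelling out to az, so the Azure CLI itself must be installed in your service image for this path to work.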

This setup allows your C# application to authenticate with Azure Service Bus using the credentials stored in the mounted azure-auth directory. 🚀

The Problem: Understanding AddHttpClient and Its Lifecycle

In .NET, HttpClient is typically registered using the AddHttpClient extension method on IServiceCollection. This method registers an IHttpClientFactory, which manages the lifecycle of HttpClient instances.

When you request an HttpClient instance from the factory, it doesn't create a completely new client. Instead, it reuses an HttpMessageHandler from an internal pool to improve performance and reduce socket exhaustion. The HttpMessageHandler is what actually manages the underlying HTTP connections, and it is only recreated after a defined lifetime (default is 2 minutes).
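If the default doesn't suit you, the lifetime can be changed when registering the client - a minimal sketch, with an arbitrary client name:

// Pooled handlers are recycled after this interval (default: 2 minutes).
services.AddHttpClient("MyClient")
    .SetHandlerLifetime(TimeSpan.FromMinutes(5));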

How the Underlying HttpMessageHandler Works

The HttpMessageHandler is responsible for:

  • Managing the actual network connection to the server
  • Handling cookies, authentication headers, and connection pooling
  • Keeping sockets open for reuse to improve performance

Because HttpClient instances created by the factory reuse the same HttpMessageHandler (until it is recycled), any stateful aspects of the handler—such as cookies—can persist across requests, even for different HttpClient instances.

What This Means: Cookies Can Be Shared Across Calls

Since HttpMessageHandler persists across HttpClient instances, any cookies set by a server during a request can be shared between different calls, even if they come from separate HttpClient instances.
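As an illustration - the factory variable, client name, and endpoint below are made up - two clients created from the same registration can replay each other's cookies:

// Both clients share the pooled handler, and therefore its CookieContainer.
var client1 = factory.CreateClient("SharedClient");
await client1.GetAsync("https://example.com/login"); // server sets a session cookie

var client2 = factory.CreateClient("SharedClient");
await client2.GetAsync("https://example.com/data");  // the same cookie is sent here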

This can lead to unintended behavior, such as:

  • User sessions being unintentionally shared across API calls
  • Authentication issues where a previous session is reused unexpectedly
  • Unexpected behavior in multi-tenant applications where requests for different tenants could carry over session state

How to Fix It: Explicitly Disable Cookies

To prevent HttpMessageHandler from storing and reusing cookies, you must explicitly disable cookie handling. This can be done by configuring HttpClientHandler when setting up the HttpClientFactory:

services.AddHttpClient("NoCookiesClient")
    .ConfigurePrimaryHttpMessageHandler(() =>
    {
        return new HttpClientHandler
        {
            UseCookies = false
        };
    });

Explanation:

  • ConfigurePrimaryHttpMessageHandler allows customization of the HttpMessageHandler used by HttpClient.
  • Setting UseCookies = false ensures that cookies are not stored or reused between requests.

With this configuration, each request will behave as if it has no prior session, preventing cookie leakage across different requests.
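The configured client is then resolved by name wherever it's needed - for example, in a hypothetical consumer:

public class WeatherService
{
    private readonly HttpClient _client;

    public WeatherService(IHttpClientFactory factory)
    {
        // Cookies are neither stored nor replayed for this client.
        _client = factory.CreateClient("NoCookiesClient");
    }
}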

Conclusion

While HttpClientFactory improves performance by reusing HttpMessageHandler, it can introduce issues with shared state, especially cookies. By explicitly disabling cookies in HttpClientHandler, you can ensure each request is isolated and prevent unintended session sharing.

When working with Azure Blob Storage, you might encounter issues due to case sensitivity in blob names. If your application expects all blob names to be lowercase but some are stored in mixed or uppercase, renaming them manually can be tedious. This Bash script automates the process, ensuring all blob names are converted to lowercase using azcopy.

How It Works

  1. Lists All Blobs: The script retrieves all blobs from a specified container using azcopy list.
  2. Converts Names to Lowercase: It iterates through the blobs and converts each name to lowercase.
  3. Copies & Renames: If the name has changed, it copies the blob to a new lowercase-named blob.
  4. Deletes the Original Blob: After successfully copying, it deletes the original blob.

Prerequisites

  • Install and configure azcopy.
  • Set your storage account name, container name, and SAS token in the script.

The Script

#!/bin/bash

STORAGE_ACCOUNT_NAME=
CONTAINER_NAME=
SAS_TOKEN=

# Set the base URL for the storage account
STORAGE_BASE_URL="https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${CONTAINER_NAME}"

# List all blobs in the specified container and process them one per line.
# Reading line by line (rather than word-splitting a variable) keeps blob
# names that contain spaces intact.
azcopy list "${STORAGE_BASE_URL}?${SAS_TOKEN}" --output-type text | grep -oP '.*(?=\s{2})' | while IFS= read -r BLOB_NAME; do
  # Convert the blob name to lowercase
  LOWERCASE_BLOB_NAME=$(echo "$BLOB_NAME" | tr '[:upper:]' '[:lower:]')

  # Check if the blob name has changed after converting to lowercase
  if [ "$BLOB_NAME" != "$LOWERCASE_BLOB_NAME" ]; then
    # Copy the original blob to a new blob with the lowercase name
    azcopy copy "${STORAGE_BASE_URL}/${BLOB_NAME}?${SAS_TOKEN}" "${STORAGE_BASE_URL}/${LOWERCASE_BLOB_NAME}?${SAS_TOKEN}"

    # Delete the original blob only after the copy operation has completed
    azcopy rm "${STORAGE_BASE_URL}/${BLOB_NAME}?${SAS_TOKEN}"
  fi
done
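To run it, fill in the three variables at the top, then (the filename is just an example):

chmod +x lowercase-blobs.sh
./lowercase-blobs.sh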

Use Case

This script is particularly useful when migrating data or normalizing blob names to avoid case-sensitivity issues in applications.

🚀 Run the script and ensure your blob names stay consistent!

RubyMine WSL2 Mappings

For my Ruby development I have been using Visual Studio Code and WSL 2.

I wanted to move to a more Ruby-friendly IDE... RubyMine.

I used this guide from the RubyMine Documentation: https://www.jetbrains.com/help/ruby/configuring-remote-interpreters-using-wsl.html#wsl_remote

The problem I had was the mapping between the Windows file system and the Linux one - step 6 in the documentation.

The problem seemed to be that the mappings didn't work when accessing the UNC shares for WSL 2. For example, the UNC path for this blog is

\\wsl$\Ubuntu\home\cchild\repos\hardcopy.dev

The way around this is to map a network drive to the UNC path and use that for the mappings in RubyMine.
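For example, from a Windows command prompt (the drive letter is up to you):

net use U: \\wsl$\Ubuntu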

I ended up doing one mapping.

U: --> /

[Screenshot: RubyMine WSL 2 mapping]

Hope this is useful.

My Unraid flash drive has died again this week - it lasted about 3 months this time. The only way I knew it had died was that all the CSS was missing when I logged in and there were some strange PHP errors.

Now the first time, I had no backups of the flash drive at all - yes, I know!

But I did learn from my mistake and this time I have backups - but this is still a pain!

The steps to fix are as follows:

  • Create a new Unraid USB drive using the Unraid Flash Utility, which can be found on the Unraid website. Be sure to use the same version you had installed!
  • Screenshot the main array page in Unraid if you can.
  • Power down and swap the USB drives over.
  • Power on.
  • Follow the procedure to replace the flash drive found on the Unraid Wiki. This process will email you a brand new registration key.
  • Copy everything in the config folder over to the new USB drive from your backup.
  • Restart.
  • Here I had to reconfigure the whole array, so use the screenshot from earlier to make sure everything is set up exactly as before.
  • Start the array and any services you need.
  • At this point you might want to check everything is working as expected. It was a few days before I realized one of my user scripts was not running! Oops!

I have two Pro keys, which is good because you can only replace your key every 12 months. If I only had one I might be in a pickle.