July 14, 2024

Everything you need to know about uploading a Blob

In this article, we will walk together through creating a Storage Account and uploading your first blob from code! Slowly, step by step, as beginners do!

First things first: let's make sure you know the basic theory behind Blob Storage. You can find it in my previous post here.

Create a new Azure Storage Account and Container

There are two ways of creating a Storage Account and a container: the first is clicking through the Azure Portal, the second is using the Azure CLI. We will go for the second option.

First, make sure you have the Azure CLI installed. You can check by running az --version on your local machine. If the command is not found, follow the instructions here to install it on your OS.

az --version

Next you can move on and create a new resource group.

NB! If you have more than one subscription connected to the account you log in with, remember to specify under which one you want the resources to be created.
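For instance, you can check which subscription is currently active with az account show, and switch to another one like this (the subscription name below is just a placeholder):

az account set --subscription "<your-subscription-name-or-id>"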

az group create --name gosia-resource-group --location westeurope

After the resource group is created, you can create a Storage Account.

NB! The account name must be between 3 and 24 characters long, and only lowercase letters and numbers are allowed. It also has to be unique across all of Azure.

az storage account create -n mystorageaccount -g gosia-resource-group -l westeurope --sku Standard_LRS

You can see that we skipped some of the flags, e.g. --access-tier. It defaults to Hot, and since we want our blobs to be easy and fast to access, that default is a perfect fit for us.
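If you ever want a different tier, the same command accepts the flag explicitly, for example:

az storage account create -n mystorageaccount -g gosia-resource-group -l westeurope --sku Standard_LRS --access-tier Hot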

Lastly, we have to create a container in which we will be putting our blobs. Depending on your requirements you might want to do this from code, but here we will work with only one container, so we might as well create it right away.

az storage container create -n gosiastoragecontainer123 --account-name mystorageaccount --fail-on-exist

It is also possible to identify the account for a new container by account key. You can use the --account-key flag and copy-paste the value from the Portal, or retrieve it by running the following command.

az storage account keys list --account-name mystorageaccount
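With one of the returned keys in hand, the container could then be created like this (the key value is just a placeholder):

az storage container create -n gosiastoragecontainer123 --account-name mystorageaccount --account-key "<your-account-key>" --fail-on-exist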

Upload your first Blob

To connect to your account you need to create a client. Note that no actual connection is made until you call an operation, as these clients simply issue HTTP requests when needed.

You can authorize in two ways: the first is by using a connection string, the second with Azure credentials. In this article we will use a connection string (setting up Azure credentials will be covered in another article). You can get it from the Azure Portal under Storage Account -> Security + networking -> Access keys and store it securely, e.g. in Key Vault.
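As a minimal sketch (the STORAGE_CONNECTION_STRING variable name is just an example chosen for this article), you could for instance read it from an environment variable instead of pasting it into the source code, and only then create the service client:

// Read the connection string from an environment variable instead of hardcoding it;
// the variable name STORAGE_CONNECTION_STRING is an example, not an official one
string storageConnectionString =
    Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING")
    ?? throw new InvalidOperationException("Missing STORAGE_CONNECTION_STRING");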

 BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnectionString);

The next step is getting a client for your container. It can be done via the service client instantiated earlier.

BlobContainerClient blobContainerClient = blobServiceClient.GetBlobContainerClient(containerName);

Finally, get a client for your desired blob.

BlobClient blobClient = blobContainerClient.GetBlobClient(imageName);

Now we can perform actions on our blob! First, let's assume we have some locally stored pictures whose names are known by the caller and passed in as a parameter. We can reach them with the following piece of code:

using (FileStream fileStream = System.IO.File.OpenRead($"<PATH>/{imageName}"))
{
    await blobClient.UploadAsync(fileStream, overwrite: true);
}

Pro tip: make sure that your <PATH> is 100% correct.

Having all of this set up, your code should look more or less like this:

    public async Task UploadToBlob(string imageName)
    {
        string storageConnectionString = "";
        BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnectionString);

        var containerName = "";
        BlobContainerClient blobContainerClient = blobServiceClient.GetBlobContainerClient(containerName);

        BlobClient blobClient = blobContainerClient.GetBlobClient(imageName);

        using (FileStream fileStream = System.IO.File.OpenRead($"<PATH>{imageName}"))
        {
            await blobClient.UploadAsync(fileStream, overwrite: true);
        }
    }

NB! I cannot stress enough that in your production code you should NOT keep connection strings hardcoded. This is just an example piece to demonstrate a whole other topic, and maybe one day we will look into the refactoring together! My personal recommended approach would be creating a singleton in your Program.cs file (Startup.cs if you still have those) and then using DefaultAzureCredential.
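As a hedged sketch of that idea (the account URL is a placeholder, the Azure.Identity NuGet package is assumed to be installed, and your identity needs a Storage Blob Data role on the account), the registration in Program.cs could look roughly like this:

// Program.cs - a minimal sketch, not production-ready
using Azure.Identity;
using Azure.Storage.Blobs;

var builder = WebApplication.CreateBuilder(args);

// Register a single BlobServiceClient for the whole application.
// The account URL is a placeholder - replace it with your own.
builder.Services.AddSingleton(
    new BlobServiceClient(
        new Uri("https://mystorageaccount.blob.core.windows.net"),
        new DefaultAzureCredential()));

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();

The registered client can then be injected into controllers or services through constructor injection.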

Let’s call it from a controller and see what we get!
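A minimal sketch of such a controller could look like the one below (the route and names are just examples, and UploadToBlob is the method from the snippet above):

[ApiController]
[Route("api/[controller]")]
public class BlobController : ControllerBase
{
    // Hypothetical action: the caller passes the name of a locally stored image in the route
    [HttpPost("{imageName}")]
    public async Task UploadFile(string imageName)
    {
        await UploadToBlob(imageName);
    }

    // ... UploadToBlob(string imageName) from above goes here ...
}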

And now we can see perdro.jpeg available in our container!

But honestly, this is not a real-life scenario. It is a great base, but surely we can come up with something even better. We can create a small web application and try to do the same from the client side!

Web app

Let's quickly create a simple web app with React. Its code can look something like the snippet below:

import React, {ChangeEvent, useRef, useState} from 'react';
import './App.css';
import axios from "axios";

function App() {
    const [file, setFile] = useState<File | null>(null);

    const handleFileChange = (e: ChangeEvent<HTMLInputElement>) => {
        if (e.target.files) {
            setFile(e.target.files[0]);
        }
    };

    const handleUpload = () => {
        if (!file) {
            // nothing selected - handle the error, e.g. show a message to the user
            return;
        }
        const formData = new FormData();
        formData.append("file", file);
        axios.post("/api/blob", formData);
        setFile(null);
    };

  return (
    <div className="App">
      <header className="App-header">
        <input type="file" onChange={handleFileChange}/>
        <div>{file && `${file.name} - ${file.type}`}</div>
        <button disabled={!file} onClick={handleUpload}>Upload</button>
      </header>
    </div>
  );
}

export default App;

Nothing special, really, but there are some things that require our attention.

To start with, in axios.post you can see that I did not provide a full URL. That is because of the CORS policy. My local solution for that is adding a proxy setting to the package.json file, which looks like this:

"proxy": "http://localhost:5297"

The next thing is that we are sending a FormData object in the request body. This is the recommended way to upload single or multiple files to a RESTful API, as it results in a multipart/form-data request.

Fixed API

Now we can take a look at the adjustments made in the backend.

using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Mvc;

namespace az204testsApi.controllers;

[ApiController]
[Route("api/[controller]/")]
public class BlobController : ControllerBase
{
    [HttpPost]
    public async Task UploadFile([FromForm] IFormFile file)
    {
        await UploadToBlob(file);
    }

    private async Task UploadToBlob(IFormFile file)
    {
        string storageConnectionString = "";
        BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnectionString);

        var containerName = "gosiablob";
        BlobContainerClient blobContainerClient = blobServiceClient.GetBlobContainerClient(containerName);

        BlobClient blobClient = blobContainerClient.GetBlobClient(file.FileName);

        await using (Stream data = file.OpenReadStream())
        {
            // Upload the file content to the blob, overwriting it if it already exists
            await blobClient.UploadAsync(data, overwrite: true);
        }
    }
}

Firstly, we have changed the parameter type: instead of a string we now expect an object of type IFormFile. The second change is the blob name, which is now assigned dynamically from file.FileName. Lastly, and most importantly, we have to read the content of our parameter, and this is done with the OpenReadStream method available on the IFormFile type.

Now we can admire our simple app actually working:

Please remember that this is just dummy code that requires refactoring. In your professional codebase you probably don't want the logic inside the controller as a private method, but rather in a service. The connection string should be better protected and stored in Key Vault. All of this is out of scope for this article, but maybe at some point we'll look into refactoring and fix all those things together? 🙂