Playing with Azure Event Hub

I’ve recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try to test these limits by designing some kind of game based on it.

As a quick disclaimer, it’s worth bearing in mind that I am playing with this technology; much of the content of this post can be found in the links at the bottom – you won’t find anything original here, just a record of my findings. You may well find more (and more accurate) information in those links.

Event Hub Namespace

The first step, as with many Azure services, is to create a namespace:

For a healthy amount of data transfer, you’ll pay around £10 per month.

Finally, we’ll create an event hub within the namespace:

When you create the event hub, it asks how many partitions you need. Partitions basically split the message delivery, and the hub is clever enough to work out that, if you have three partitions and two listeners, one listener should take two partitions, and the other just one:

We’ll need a shared access policy, so that the producer has permission to send:

New Console Apps

We’ll need to create two applications: a producer and a consumer.

Let’s start with the producer. Create a new console app and add this NuGet library.
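For reference, the types used in the code below (EventHubClient and EventHubsConnectionStringBuilder) live in the Microsoft.Azure.EventHubs package, so installing that from the Package Manager Console should be all you need:

Install-Package Microsoft.Azure.EventHubs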

Here’s the code:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class Program
{
    private static EventHubClient eventHubClient;

    // Note: the shared access key is redacted
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
 
    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };
 
        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
 
        // Keep prompting for messages until the user enters a blank line
        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;
 
            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }
 
        await eventHubClient.CloseAsync();
 
        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}

Consumer

Next we’ll create a consumer; the first thing we’ll need is an access policy that grants permission to listen:

We’ll create a second console application with the same library, plus the event processor library.
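For reference, the EventProcessorHost and IEventProcessor types used below come from the Microsoft.Azure.EventHubs.Processor package, so something like this should cover it:

Install-Package Microsoft.Azure.EventHubs
Install-Package Microsoft.Azure.EventHubs.Processor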

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

class Program
{
    // Note: the shared access key and storage account key are redacted
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";
 
    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);
 
    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");
 
        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);
 
        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();
 
        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();
 
        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }
 
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }
 
    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }
 
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
 
        // Checkpoint so that these events are not re-read after a restart
        return context.CheckpointAsync();
    }
}

As you can see, we can now transmit data through the Event Hub into client applications:

Footnotes

*Large in terms of frequency, rather than volume – for example, transmitting a small message twice a second, rather than uploading a petabyte of data.

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

What can you do with a logic app? Part three – Creating a Logic App Client

One of the things that is missing from Azure Logic Apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than what you can produce by following this post.

In this post, we’ll create a very basic client for a logic app. Obviously, with some thought, this could easily be extended to allow a fully functional, interactive workflow system.

Basic Logic App

Let’s start by designing our logic app. The app in question is going to be a very simple one: it will add a message to a logging queue (just so it has something to do), then ask the user a question – left or right – by putting a message onto a topic. Based on the user’s response, we’ll write a message to the queue saying either left or right. Let’s have a look at our Logic App design:

It’s worth pointing out a few things about this design:
1. The condition uses the expression base64ToString() to convert the encoded message into plain text (an example is shown below).
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it’s a ‘feature’ of logic apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).
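As an illustration of point 1, a condition that compares the incoming message body to the text “left” might look something like this (the Service Bus connector delivers the message content base64-encoded in ContentData; the exact property name here is an assumption based on that connector):

@equals(base64ToString(triggerBody()?['ContentData']), 'left')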

Queues and Topics

The “Log to message queue” action above puts an entry into a queue; so, a quick note on why we’re using a queue for logging, but a topic for the interaction with the user. In a real-life version of this system, we might have many users, all wanting to perform the same action. Let’s say they’re all part of a sales process, and the actions are steps along that process; adding these to a queue maintains their sequence. Here’s the queue and topic layout that I’m using for this post:

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we’ll drop a message into the topic to start it), and the second waits for a second message to go into the topic.

To add a trigger part way through the workflow, simply add an action, then search for and select “Triggers”:

Because we have a trigger part way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off:

As soon as a message is posted, the workflow carries on:

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we’ll just create a basic console app for now.

We need the client to do two things: read from a topic subscription, and write to a topic. I haven’t exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let’s create a console application:

Once that’s done, we’ll need the Service Bus client library; install it from here.
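For reference, the types used below (SubscriptionClient, TopicClient and Message) live in the Microsoft.Azure.ServiceBus package:

Install-Package Microsoft.Azure.ServiceBus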

The code is generally quite straightforward, and looks a lot like the code to read from and write to queues. The big difference is that you don’t read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class Program
{
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");
 
        await WaitForever();
    }
 
    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}
public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;
    public void RegisterToRead(string topicName, string subscriptionName)
    {            
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);
 
        // Complete messages manually (after we have the user's response), and
        // keep renewing the peek-lock for up to an hour while we wait
        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };

        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
    }
 
    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
 
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();
 
        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
 
        await SendResponse(leftOrRight, "userinput");
    }
 
    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }
 
    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}

If we run it, then when the logic app reaches the second trigger, we’ll get a message from the subscription and ask for directions:

Based on the response, the logic app will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, I’ve noticed one recurring feature: they start to get used for things that don’t fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and that in 10 years’ time, people will roll their eyes at how Logic Apps were used where a simple web service would have done the whole job.

The saving grace here is source control. The workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question.

There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (a very fixable problem, with a bit of adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post). The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test it; this isn’t, per se, fixable, although you can introduce some canary logic (again, probably the feature of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions

https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

What can you do with a logic app? Part Two – Use Excel to Manage an E-mail Notification System

In this post I started a series covering different scenarios in which you might use an Azure Logic App, and how you might go about it. In this, the second post, we’re going to set up an Excel spreadsheet that allows you to simply add a row to an Excel table and have a logic app act on that row.

So, we’ll set up a basic spreadsheet with an e-mail address, subject, text, and the date we want it to send; then we’ll have the logic app send the next eligible mail in the list and mark it as sent.

Spreadsheet

I’ll first state that I do not have an Office 365 subscription, and nothing that I do here will require one. We’ll create the spreadsheet in Office Online. Head over to OneDrive (if you don’t have a OneDrive account, they’re free) and create a new spreadsheet:

In the spreadsheet, create a new table – just enter some headers (like below), then highlight the columns and select “Insert Table”:

Remember to check “My Table Has Headers”.

Now enter some data:

Create the Logic App

In this post I showed how you can use Visual Studio to create and deploy a logic app; we’ll do that here:

Once we’ve created the logic app, we’ll need to add an action that reads the Excel file we created; in this case, “List rows present in a table”:

This also requires that we specify the table (if you’re using the free online version of Excel then you’ll have to live with the table name you’re given):

Loop

This retrieves a list of rows, so the next step is to iterate through them one by one. We’ll use a For-Each:

Conditions

Okay, so we’re now looking at every row in the table; but we don’t want every row, only the ones that have not already been sent and that are due to be sent (that is, the date is either today or earlier). We can use a conditional statement for this:

But we have two problems:

  • Azure Logic Apps are very bad at handling dates – that is to say, they don’t
  • There is currently no way in an Azure Logic App to update an Excel spreadsheet row (you can add and delete only)

The former is easily solved; the way I elected to solve the latter is to simply delete the row instead of updating it. It would be possible to delete the current row and add it back with new values; however, we won’t bother with that here.

Back to the date problem; what we need here is an Azure function…

Creating an Azure Function

Here is the code for our function (see here for details of how to create one):

        [FunctionName("DatesCompare")]
        public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            string requestBody = new StreamReader(req.Body).ReadToEnd();
            return ParseDates(requestBody);

        }

        public static IActionResult ParseDates(string requestBody)
        {
            dynamic data = JsonConvert.DeserializeObject(requestBody);

            DateTime date1 = (DateTime)data.date1;
            DateTime date2 = DateTime.FromOADate((double)data.date2);

            int returnFlagIndicator = 0;
            if (date1 > date2)
            {
                returnFlagIndicator = 1;
            }
            else if (date1 < date2)
            {
                returnFlagIndicator = -1;
            }

            return (ActionResult)new OkObjectResult(new
            {
                returnFlag = returnFlagIndicator
            });
        }

There are a few points to note about this code:
1. The date coming from Excel extracts as a double, which is why we need to use FromOADate.
2. The reason for splitting the function up is so that the main logic can be more easily unit tested (there’s a sketch of such a test below). If you ever need a reason for unit testing, then try to work out why an Azure function isn’t working inside a logic app!
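As a rough illustration of point 2, a unit test for ParseDates might look something like this (DatesCompareFunction is the arbitrary class name used above, and the OADate 43466 corresponds to 1st January 2019):

using Microsoft.AspNetCore.Mvc;
using Xunit;

public class ParseDatesTests
{
    [Fact]
    public void ParseDates_FirstDateLater_ReturnsFlagOne()
    {
        // date2 is the OADate (Excel) representation of 1st January 2019
        string requestBody = "{ \"date1\": \"2019-06-01\", \"date2\": 43466.0 }";

        var result = (OkObjectResult)DatesCompareFunction.ParseDates(requestBody);

        // The function returns an anonymous object, so read returnFlag via reflection
        object value = result.Value;
        int returnFlag = (int)value.GetType().GetProperty("returnFlag").GetValue(value);
        Assert.Equal(1, returnFlag);
    }
}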

The logic around this function looks like this:

We build up the request body with the information that we have, and then parse the output. Finally, we can check whether the date is in the past, and then send the e-mail:

Lastly, as we said earlier, we’ll delete the row to ensure that the e-mail is only sent once:

The eagle-eyed and sane amongst you will notice that I’ve used the subject as a key. Don’t do this – it’s very bad practice!

References

https://github.com/Azure/azure-functions-host/wiki/Azure-Functions-runtime-2.0-known-issues

Short Walks – xUnit Warning

As with many of these posts – this is more of a “note to self”.

Say you have an assertion that looks something like this in your xUnit test:

Assert.True(myEnumerable.Any(a => a.MyValue == "1234"));

In later versions (I’m not sure exactly which one it was introduced in), you’ll get the following warning:

warning xUnit2012: Do not use Enumerable.Any() to check if a value exists in a collection.

So, xUnit has a nice little feature whereby you can use the following syntax instead:

Assert.Contains(myEnumerable, a => a.MyValue == "1234");
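For completeness: if you were asserting the opposite (that no matching element exists), the analogous call is:

Assert.DoesNotContain(myEnumerable, a => a.MyValue == "1234");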

Short Walks – Submit a single row of data in ReactJS

While looking into the React sample app, I came across a scenario whereby you might need to pass a specific piece of data across to an event handler. A lot of the online examples cover data state; but what happens when you have a situation such as the one in the sample app? Consider this:

In this instance, you want to pass the temperature of the line you’ve selected. The solution is quite simple, and documented here:

private renderForecastsTable(forecasts: WeatherForecast[]) {
    return <table className='table'>
        <thead>
            <tr>
                <th>Date</th>
                <th>Temp. (C)</th>
                <th>Temp. (F)</th>
                <th>Summary</th>
                <th></th>{/* header for the button column rendered below */}
            </tr>
        </thead>
        <tbody>
        {forecasts.map(forecast =>
            <tr key={ forecast.dateFormatted }>
                <td>{ forecast.dateFormatted }</td>
                <td>{ forecast.temperatureC }</td>
                <td>{ forecast.temperatureF }</td>
                <td>{forecast.summary}</td>
                <td><button onClick={(e) => this.handleClick(e, forecast)}>Log Temperature!</button></td>
            </tr>
        )}
        </tbody>
    </table>;
}

Here, we’re passing the entire forecast object to the handler, which looks like this:

handleClick = (event: React.FormEvent<HTMLButtonElement>, forecast: WeatherForecast) => {
    console.log("timestamp: " + event.timeStamp);
    console.log("data: " + forecast.temperatureC);
}

https://reactjs.org/docs/forms.html

https://reactjs.org/docs/handling-events.html

Adding a New Screen to the React Template Project

In this post I started looking into ReactJS. After getting the sample project running, I decided that I’d try adding a new screen. Since it didn’t go as smoothly as I expected, I’ve documented my adventures.

The target of this post is to create a new screen, using the sample project inside Visual Studio.

Step 1

Create a brand new project for React:

If you run this out of the box (if you can’t because of missing packages then see this article), you’ll get a screen that looks like this:

Step 2

Add a new tsx file to the components folder:

Here’s some code to add into this new file:

import * as React from 'react';
import { RouteComponentProps } from 'react-router';
 
 
export class NewScreen extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div>
            <h1>New Screen Test</h1>
        </div>;
    }
}
 

The JavaScript-as-HTML syntax above (JSX) is one of the things that makes ReactJS an appealing framework. Combine that with TypeScript, and you get a very XAML feel to the whole web application.

Step 3

Add a link to the Navigation Screen (NavMenu.tsx):

<div className='navbar-collapse collapse'>
    <ul className='nav navbar-nav'>
        <li>
            <NavLink to={ '/' } exact activeClassName='active'>
                <span className='glyphicon glyphicon-home'></span> Home
            </NavLink>
        </li>
        <li>
            <NavLink to={ '/counter' } activeClassName='active'>
                <span className='glyphicon glyphicon-education'></span> Counter
            </NavLink>
        </li>
        <li>
            <NavLink to={ '/fetchdata' } activeClassName='active'>
                <span className='glyphicon glyphicon-th-list'></span> Fetch data
            </NavLink>
        </li>
        <li>
            <NavLink to={'/newscreen'} activeClassName='active'>
                <span className='glyphicon glyphicon-th-list'></span> New screen
            </NavLink>
        </li>
 
    </ul>
</div>

If you run this now, you’ll see the navigation entry, but clicking on it will give you a blank screen. It was exactly this scenario that motivated this post!

Step 4

Finally, the routes.tsx file needs updating so that it knows which screen to load for which path:

import * as React from 'react';
import { Route } from 'react-router-dom';
import { Layout } from './components/Layout';
import { Home } from './components/Home';
import { FetchData } from './components/FetchData';
import { Counter } from './components/Counter';
import { NewScreen } from './components/NewScreen';
 
export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={FetchData} />
    <Route path='/newscreen' component={NewScreen} />
</Layout>;

Using NSubstitute for partial mocks

I have previously written about how to, effectively, subclass using NSubstitute; in this post, I’ll cover how to partially mock out that class.

Before I get into the solution: what follows is a workaround to allow badly written or legacy code to be tested without refactoring. If you’re reading this and thinking you need this solution, my suggestion would be to refactor and use some form of dependency injection. However, for various reasons, that’s not always possible (hence this post).

Here’s our class to test:

public class MyFunkyClass
{
    public virtual void MethodOne()
    {        
        throw new Exception("I do some direct DB access");
    }
 
    public virtual int MethodTwo()
    {
        // Simulates DB access; note that the return statement below is never reached
        throw new Exception("I do some direct DB access and return a number");

        return new Random().Next(5);
    }
 
    public virtual int MethodThree()
    {
        MethodOne();
        if (MethodTwo() <= 3)
        {
            return 1;
        }
 
        return 2;
    }
}

The problem

Okay, so let’s write our first test:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = new MyFunkyClass();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

So, what’s wrong with that?

Well, we have some (simulated) DB access, so the code will error.

Not the, but a, solution

The first thing to do here is to mock out MethodOne(), as it has (pseudo) DB access:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

Running this test now will fail with:

Message: System.Exception : I do some direct DB access and return a number

We’re past the first hurdle. We can presumably do the same thing for MethodTwo:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

Now when we run the code, the test still fails, but it no longer accesses the DB:

Message: Assert.Equal() Failure
Expected: 2
Actual: 1

The problem here is that, even though we don’t want MethodTwo to execute, we do want it to return a predefined result. Once we’ve told it not to call the base method, we can then tell it to return whatever we choose (these are configured separately – see the bottom of this post for a more detailed explanation of why); for example:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
    myFunkyClass.MethodTwo().Returns(5);
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

And now the test passes.

TLDR – What is this actually doing?

To understand this better, we could do this entire process manually. Only when you’ve felt the pain of a manual mock can you really see what mocking frameworks such as NSubstitute are doing for us.

Let’s assume that we don’t have a mocking framework at all, but that we still want to test MethodThree() above. One approach that we could take is to subclass MyFunkyClass, and then test that subclass:

Here’s what that might look like:

class MyFunkyClassTest : MyFunkyClass
{
    public override void MethodOne()
    {
        //base.MethodOne();
    }
 
    public override int MethodTwo()
    {
        //return base.MethodTwo();
        return 5;
    }
}

As you can see, now that we’ve subclassed MyFunkyClass, we can override the behaviour of the relevant virtual methods.

In the case of MethodOne, we’ve effectively issued a DoNotCallBase() (by not calling base!).

For MethodTwo, we’ve issued a DoNotCallBase, followed by a Returns statement.

Let’s add a new test to use this new, manual method:

[Fact]
public void Test2()
{
    // Arrange 
    MyFunkyClassTest myFunkyClassTest = new MyFunkyClassTest();
 
    // Act
    int result = myFunkyClassTest.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

That’s much cleaner – why not always use manual mocks?

It is much cleaner if you always want MethodTwo to return 5. Once you need it to return 2, you have two choices: either you create a new mock class, or you start putting logic into your mock. The latter, if done wrongly, can end up with code that is unreadable and difficult to maintain; and if done correctly, will end up as a mini version of NSubstitute.

Finally, however well you write the mocks, as soon as you have more than one for a single class, every change to the class (for example, changing a method’s parameters or return type) results in a change to more than one test class.

It’s also worth mentioning again that this problem is one that has already been solved, cleanly, by dependency injection.

Forcing an NPM Restore

I’ve recently started looking into the JavaScript library ReactJS. Having read a couple of tutorials and watched the start of a Pluralsight video, I did the usual and started creating a sample application. The ReactJS template in VS is definitely a good place to start; however, the first issue that I came across was with NPM.

Upon creating a new web application, I was faced with the following errors:

The reason is that, unlike NuGet, npm doesn’t seem to sort your dependencies out automatically. After playing around with it for a while, this is my advice to my future self on how to deal with such issues.

The best way to force npm to restore your packages seems to be to call

npm install

either from PowerShell, or from the Package Manager Console inside VS.

PowerShell

On running this, I found that, despite getting the error shown above, the packages were still restored; however, you can trash that file:

Following that, delete the node_modules directory and re-run, and there are no errors:

Package Manager Console

In the Package Manager Console, ensure that you’re in the right directory (you’ll be in the solution directory by default, which is the wrong one):

References

https://stackoverflow.com/questions/12866494/how-do-you-reinstall-an-apps-dependencies-using-npm

Short Walks – Using appsettings.json in ASP.NET Core

One of the things that is very different when you move to ASP.NET Core is the way that configuration files are treated. This partly comes from the drive to move things that are not configuration out of configuration files. It looks like the days of app.config and web.config are numbered and, in their place, we have appsettings.json. Here’s an example of what that new file might look like:

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AzureAppSettings": {
    "ApplicationInsightsKey": "1827374d-1d50-428d-92a1-c65fv2d73272"
  }
}

The old files were very flat and, using the ConfigurationManager, you could simply read a setting; something like this:

var appSettings = ConfigurationManager.AppSettings;
string result = appSettings[key];

So, the first question is: can you still do this? The answer is, pretty much, yes:

public void ConfigureServices(IServiceCollection services)
{
    IConfigurationBuilder builder = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json");
    Configuration = builder.Build();

    string key = Configuration["AzureAppSettings:ApplicationInsightsKey"];
}

However, you now have the option of creating a class to represent your settings; something like:

AzureAppSettings azureAppSettings = new AzureAppSettings();
Configuration.GetSection("AzureAppSettings").Bind(azureAppSettings);
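For completeness, the AzureAppSettings class for the JSON above would be something along these lines (the property names need to match the JSON keys for Bind to work):

public class AzureAppSettings
{
    public string ApplicationInsightsKey { get; set; }
}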

If you use this approach then you’ll need an extension library from NuGet:

Install-Package Microsoft.Extensions.Configuration.Binder

Is it better, or worse?

At first glance, it would appear that things have gotten worse; or at least, more complex. However, the previous method had one massive problem: it was a static class, with the result that most people ended up writing their own wrapper around the ConfigurationManager class. We now have a class that can be injected out of the box; alternatively, you can split your configuration up into classes and pass those classes around. The more I think about this, the better I like it: it makes more sense for a class or method to accept the parameters that are necessary for its execution and, arguably, it breaks the single responsibility principle if you’re faffing around trying to work out whether you have all the operating parameters.
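As a sketch of what the injectable version might look like (this assumes the AzureAppSettings class above, the standard options pattern from Microsoft.Extensions.Options, and a hypothetical TelemetryService consumer):

// In Startup.ConfigureServices: bind the section and register it for injection
services.Configure<AzureAppSettings>(Configuration.GetSection("AzureAppSettings"));

// Elsewhere, the settings arrive through the constructor
public class TelemetryService
{
    private readonly AzureAppSettings _settings;

    public TelemetryService(IOptions<AzureAppSettings> options)
    {
        _settings = options.Value;
    }
}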

The other advantage here is that the configuration file can now be hierarchical. If you have well-designed, small pieces of software then this might not seem like much of an advantage; but if you have 150 settings in your web.config, it makes all the difference.