Add Application Insights to an Azure Resource

Application Insights provides a set of metric tools to analyse the performance and behaviour of various Azure services. For example, you can see how many calls are being made to your Azure web site, or how many errors your service has generated.

This post is concerned with the scenario where you want to manually log to Application Insights. The idea is that, in addition to the above metrics, you can output specific log messages in a central location. You might just want to log some debug information (“Code reached here”, “now here” – don’t try and say you’ve never been there!). Or you might find that there is a particular event in your program that you want to track; or maybe you’ve got two different resources, and you’re trying to work out how quick or frequent the communication between them is.

Set-up

The first step is to set up a new App Insights resource in the Azure Portal (you can also use the recently released Azure Portal app).

Select to create a new resource, and pick Application Insights:

When you create the resource, you’ll be asked for some basic details (try to keep the location in the same region as the app(s) you’ll be monitoring):

The instrumentation key is shown in the overview, and you will need this later:

You should be able to see things like failed requests, response time, etc. However, we’ve just configured this, so it’ll be quiet for now:

Check the “Search” window (which is where log entries will appear):

The other place you can see the output is “Logs (Analytics)”.

Create Web Job

The next thing we need is something to trace; let’s go back to a web job.

Once you’ve set up your web job, add App Insights from NuGet:

Install-Package ApplicationInsights.Helpers.WebJobs

The class that we’re principally interested in here is the TelemetryClient. You’ll need to instantiate this class; there are two ways to do this:

var config = Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.CreateDefault();
 
var tc = new TelemetryClient(config);

This works if you link the App Insights instance to the resource that you’re tracking in Azure; you’ll usually do that here:

Once you’ve switched it on, you can link your resource; for example:

The other way to link them, without telling Azure that they are linked, is this:

TelemetryConfiguration.Active.InstrumentationKey = InstrumentationKey;

(You can use the instrumentation key that you noted earlier.)
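A variation on this – just a sketch, using the key you noted earlier – is to set the key on a configuration instance, rather than on the static Active configuration:

var config = Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.CreateDefault();

// Set the key explicitly on this configuration instance (the key itself
// is whatever you copied from the overview blade)
config.InstrumentationKey = "<your instrumentation key>";

var tc = new TelemetryClient(config);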

Tracing

Now that you’ve configured the telemetry client, let’s say you want to track an exception:

    var ai = new TelemetryClient(config);
    ai.TrackException(exception, properties);

Or you just want to trace something happening:

    var ai = new TelemetryClient(config);
    ai.TrackTrace(text);
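The same client can also record custom events; here’s a minimal sketch (the event name, properties and metric below are just placeholders, not from the original post), along with a flush so that a short-lived web job doesn’t exit before the telemetry is sent:

    var ai = new TelemetryClient(config);

    // Record a named event with some custom dimensions and a metric
    // (requires System.Collections.Generic)
    ai.TrackEvent(
        "FileProcessed",                                            // hypothetical event name
        new Dictionary<string, string> { { "Source", "WebJob" } },  // custom properties
        new Dictionary<string, double> { { "ItemCount", 3 } });     // custom metrics

    // Telemetry is batched in memory, so flush before the process exits
    ai.Flush();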

Side note

The following code will result in a warning, telling you that it’s deprecated:

    var ai = new TelemetryClient();
    ai.TrackTrace(text);

References

http://pmichaels.net/2017/08/13/creating-basic-azure-web-job/

https://blogs.msdn.microsoft.com/devops/2015/01/07/application-insights-support-for-multiple-environments-stamps-and-app-versions/

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-cloudservices

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-windows-desktop

https://github.com/MicrosoftDocs/azure-docs/issues/40482

An ADR Visual Studio Tool – Part 5 – Sub Projects

Here, I started writing about my efforts to create an extension for Visual Studio that would allow a user to see all of the ADR records in their solution.

If you wish to see the code for this project, then it can be found here.

Sub Projects

In this post, I wanted to cover the concept of Sub Projects. Essentially, when you have a solution folder, scrolling through the solution projects will return top level solution folders as “Project Items”. Being folders, these don’t contain “Project Items” of their own – rather they contain Sub Projects. Let’s see how we could change our code to look at these:

        private async Task ScanProjectItems(
            ProjectItems projectItems, ProjectData projectData, string solutionDirectory)
        {
            await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

            foreach (EnvDTE.ProjectItem pi in projectItems)
            {
                if (pi.IsKind(ProjectItemTypes.SOLUTION_FOLDER, 
                              ProjectItemTypes.PROJECT_FOLDER,
                              ProjectItemTypes.SOLUTION_ITEM))
                {
                    if (pi.ProjectItems != null)
                    {
                        await ScanProjectItems(pi.ProjectItems, projectData, solutionDirectory);
                        continue;
                    }
                    else if (pi.SubProject != null)
                    {
                        await ScanProjectItems(pi.SubProject.ProjectItems, projectData, solutionDirectory);
                        continue;
                    }                    
                }

                if (!_rulesAnalyser.IsProjectItemNameValid(pi.Name))
                {
                    continue;
                }

                string text = await pi.GetDocumentText(solutionDirectory);
                if (string.IsNullOrWhiteSpace(text)) continue;

                projectData.Items.Add(new Models.ProjectItem()
                {
                    Name = pi.Name,
                    Data = text
                });
            }
        }

Previously, we were only calling recursively where we had project items, but now we’re checking for SubProjects, and using the project items inside the sub project to recursively call the method.

Validation

The other issue that we have is that, for the solution items, we can’t get the path to the specific item. For normal projects, we would do it like this:

        private async static Task<string> GetFullPath(Properties properties)
        {
            try
            {
                await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();
                return properties?.Item("FullPath")?.Value?.ToString();
            }
            catch
            {
                return string.Empty;
            }
        }

So, what we need to do is check whether we can get the path; then, if that’s blank, check whether we can get it another way; then, if that’s still blank… and so on. It looks like this:

            string path = await GetFullPath(projectItem.Properties);
            if (string.IsNullOrWhiteSpace(path))
            {
                path = await GetFullPath(projectItem.ContainingProject?.Properties);

                if (string.IsNullOrWhiteSpace(path))
                {
                    path = Path.Combine(solutionDirectory, projectItem.Name);
                }
                else
                {
                    path = Path.Combine(path, projectItem.Name);
                }
            }

Not very pretty, I’ll grant!
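One way to tidy it up a little – just a sketch, with the same behaviour as the code above – is to collapse the fallbacks:

            string path = await GetFullPath(projectItem.Properties);
            if (string.IsNullOrWhiteSpace(path))
            {
                // Fall back to the containing project's path, or failing that,
                // the solution directory
                string basePath = await GetFullPath(projectItem.ContainingProject?.Properties);

                path = Path.Combine(
                    string.IsNullOrWhiteSpace(basePath) ? solutionDirectory : basePath,
                    projectItem.Name);
            }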

References

https://stackoverflow.com/questions/38740773/how-to-get-project-inside-of-solution-folder-in-vsix-project

https://stackoverflow.com/questions/2336818/how-do-you-get-the-current-solution-directory-from-a-vspackage

Download file from Azure storage using Javascript

.Net is an excellent framework – if you want proof of that, try to do even very simple things in Javascript. It feels a bit like getting out of a Tesla and travelling back in time to drive a Reliant Robin (I’ve never actually driven either of these cars, so I don’t really know whether it feels like that or not!)

If, for example, you want to download a file from a Blob Storage container, then in .Net you’re looking at about four lines of strongly typed code. There’s basically nothing to do, and it consistently works. If you want to do the same thing in Javascript, there’s a Microsoft Javascript Library.

In said library, there is a function that should get a download URL for you; it’s named getUrl:

const downloadLink = blobService.getUrl(containerName, fileId, sasKey);            

If you use this (at least, when I used it), you may get the following error:

Signature did not match

To get around this, you can build the download link manually like this:

const downloadLink = blobUri + '/' + containerName + '/' + fileId + sasKey;

Comparing the two, the former appears to escape the question mark in the SAS.

To actually download the file, you can use this:

        // https://stackoverflow.com/questions/3749231/download-file-using-javascript-jquery
        function downloadURI(uri, name) 
        {
            var link = document.createElement("a");
            link.download = name;
            link.href = uri;
            link.click();
        }

And the final download function looks like this:

        function downloadFile(sas, storageUri,
            containerName, fileId, destinationFileName) {

            var blobService = AzureStorage.Blob.createBlobServiceWithSas(storageUri, sas);
            
            const downloadLink = storageUri +'/' + containerName + '/' + fileId + sas;

            downloadURI(downloadLink, destinationFileName);
        }

An ADR Visual Studio Tool – Part 4 – Dependency Injection

Continuing with my little series on creating a Visual Studio extension, in this post I’ll talk about how to add dependency injection to your project.

If you’d like to see the whole solution for this, it can be found here.

Unity

In this post on Azure Functions, I talked about using Unity as an IoC container in a place where an IoC container might not necessarily fit; whilst this is no longer true for Azure Functions, it does appear to be for extensions – I presume because they don’t expect you to have an extension big enough to warrant IoC. Also, even with DI, testing is very difficult, because most of what you’re doing, you’re doing to Visual Studio.

Let’s start by installing Unity in our project:

Install-Package Unity

Rules Analyser

In our case, we were analysing the project and extracting files; however, we were extracting all of the files, and we only want the markdown files. Consequently, I created a RulesAnalyser class to perform that check:

    public class RulesAnalyser : IRulesAnalyser
    {
        public bool IsProjectItemNameValid(string projectItemName) =>
            projectItemName.EndsWith("md");        
    }

We could (and I did initially) instantiate that directly in the ViewModel, but that feels quite dirty.

AdrPackage

The *Package file for the extension seems to be the entry point, so we can add the Unity container here:

    public sealed class AdrPackage : AsyncPackage
    {        
        public static Lazy<IUnityContainer> UnityContainer =
         new Lazy<IUnityContainer>(() =>
         {
             IUnityContainer container = InitialiseUnityContainer();
             return container;
         });

        private static IUnityContainer InitialiseUnityContainer()
        {
            UnityContainer container = new UnityContainer();
            container.RegisterType<IRulesAnalyser, RulesAnalyser>();
            container.RegisterType<ISolutionAnalyser, SolutionAnalyser>();
            return container;
        }

        . . .

View Model

The next thing we need to do is to inject our dependencies.

        public AdrControlViewModel() 
            : this(AdrPackage.UnityContainer.Value.Resolve<IRulesAnalyser>(),
                  AdrPackage.UnityContainer.Value.Resolve<ISolutionAnalyser>())
        {}

        public AdrControlViewModel(IRulesAnalyser rulesAnalyser, ISolutionAnalyser solutionAnalyser)
        {            
            _rulesAnalyser = rulesAnalyser;
            _solutionAnalyser = solutionAnalyser;

            Scan = new RelayCommandAsync<object>(ScanCommand);
        }

And that’s it, we now have a working DI model in our project.
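A nice side effect of the second constructor is that a test can bypass Unity entirely and pass in test doubles. For example, a sketch, assuming a mocking library such as Moq (which is not part of the original project):

    // A sketch only - Moq is an assumption; any test double would do
    var rulesAnalyser = new Mock<IRulesAnalyser>();
    rulesAnalyser
        .Setup(r => r.IsProjectItemNameValid(It.IsAny<string>()))
        .Returns(true);

    var solutionAnalyser = new Mock<ISolutionAnalyser>();

    var viewModel = new AdrControlViewModel(
        rulesAnalyser.Object, solutionAnalyser.Object);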

References

https://stackoverflow.com/questions/2875429/iunitycontainer-resolvet-throws-error-claiming-it-cannot-be-used-with-type-par

https://www.pmichaels.net/2018/02/04/using-unity-azure-functions/

I wrote a book – here’s what I learned

I’ve recently had a book published; if you’re interested in seeing what it is, you can find it here.

This post is simply a summary of things that I wished I’d known before I started writing. As a quick disclaimer, please don’t treat this article as legal or financial advice. It’s simply a list of things that I, personally, have encountered; if you’re in a similar situation, I would encourage you to seek advice in the same way that I have, but hopefully this article will give you a starting point.

I’ll also mention that I’m in the UK, so some of the points here may be specific to location (certainly the figures that I’m giving are in sterling, and rough approximations).

Choosing a Topic

Did I mention this was all subjective?

In my case, the topic was chosen for me; however, I would suggest that, where you have a choice, you pick something that isn’t time sensitive.

Let me try to illustrate what I mean: when I started writing the above book, .Net Core 3 was available only in preview, so I’d write a chapter, test the software, and then Microsoft would release a breaking change! I can’t describe in words how frustrating that is – you spend, maybe two or three weeks preparing and writing a chapter, and then you have to re-write it, almost from scratch!

Further, .Net Core 3 had a release date: obviously, once it’s released, there’s a sweet spot, where people want to read about the new tech – I can’t help but think that the new release cadence by Microsoft, whilst helpful for those using the tech, is just too fast for anyone to write any kind of lasting documentation.

Finally, think of the books that you remember in the industry: Domain Driven Design, Design Patterns: Elements of Reusable Object-Oriented Software, Test Driven Development: By Example; these are all books that are not technology specific – the GoF book was released in 1994, but it’s still relevant today!

Legal

Most of this section will discuss contracts.

The Contract Itself

You may want to consult a solicitor to get professional advice on this. When I asked around, I was quoted around £200 – £300 for a solicitor to review the contract and offer me advice. However, there is another option: The Society of Authors offers contract reviews as part of the membership fee (which was around £100 at the time of writing).

Insurance

If you get a book contract, the contract itself will say that you, personally, are liable for anything: if they get sued, you’re liable. There are essentially three approaches to address this:

1. Professional Indemnity Insurance

This is a private insurance policy that you take out against being sued. It is not cheap, and you should account for this expense when you decide to embark on writing a book. My impression is that you are very unlikely to actually be sued: the book itself will have a disclaimer against any accidental damage; plus, providing you don’t simply copy and paste blog posts and put them in your book without attribution (that is, your work must be original, or have the permission of the author to replicate it), you’re unlikely to fall foul of any copyright issues.

2. A Limited Company

I’ve done a lot of research into this and, to be honest, I’m still not completely sure about it. A limited company has limited liability, so if you were to be sued, providing that you signed the contract on behalf of the company, you, personally, should be safe. I have, however, seen (and received) advice saying that the director of the limited company may bear some personal liability for any losses to the company.

Additionally, setting up a limited company is not a cheap option: although setting up the company itself only costs ~£10 (in the UK), you must submit company accounts – you can actually go to prison if you get that wrong – so you may end up forking out for an accountant (budget between £500 – £1000 / year for that!)

3. Do Nothing

I strongly suspect this is what most people do. Technically it does leave you personally liable.

Financial

Once you get offered a book contract, you will be given an advance. In addition, you will be entitled to royalties of any book sales. Here’s how that works:

Imagine your advance is £1000, and you get 10% from book sales. Let’s say the book is consistently £50 retail.

After the first 200 books have sold, your royalties will reach your advance (10% of £50 is £5 per copy, so 200 copies covers the £1,000 advance) – meaning that you will start to receive some money per sale.

If the book sells 150 copies, then you still receive the advance, but no further money.

Remember as well that any money that you earn needs to be declared to HMRC – so you’ll need to request a self-assessment.

Employment

If you have a full time job, it’s worth bearing in mind that writing and publishing a book is probably in breach of your contract; consequently, you’ll have to speak to your employer before you sign any contract.

Editing

My process for editing my blog posts (and any fiction that I write – some of which can be seen here) is that I write them in One Note, or Word, and then I transfer them to the WordPress site, come back in about an hour and read the preview – if it looks good to me, then it goes out.

The editing process I encountered with a publisher was, obviously, different. There seems to be a formula to the layout that they require. For example, if you have a series of actions, they like them to be in numbered steps. I assume to a greater or lesser extent, every publisher does essentially the same thing.

It’s also worth bearing in mind that the editing process might change some of your text without your consent – you need to be aware this is a possibility. It’s also a very likely possibility that, some months after you’ve finished on a chapter, you’ll be asked to revisit it. On occasion, I found myself following my own chapter to try and remember some of the material – which I suppose is a good thing: like when you go searching for something on the internet, and you come across your own post!

Promotion

One way or another, you’re going to have to take an interest in promoting your work. If you’re a fiction writer, that means book signings, etc. However, if you publish a tech book, that means talks, podcasts, and blog posts (such as this)! While I have done a flash talk on (essentially) this blog post, I would probably advise against giving a talk on “My Book”; rather, pick a subject, and simply mention that you have also written a book (if the subject you choose is covered in the book then that’s probably better, but not essential!)

Summary

If you do decide to embark on writing a book, it is hugely rewarding, you learn a lot of things, and make new contacts. However, be prepared for some late nights and lost weekends. Also, don’t be under the impression that you can make any money out of this: you’re very likely to be out of pocket by the time you finish.

An ADR Visual Studio Tool – Part 3 – Listing and Reading the files

In this post, I refactored the VS extension plug-in that I originally started here.

We’ll get the plug-in to list the items found in the projects, and read the contents of the files. The source code for this can be found here. I won’t be listing all the source code in this article (most of it is just simple WPF and View Model binding).

To look through all the projects and folders in the solution, we’ll need to recursively scan all the files, and then read them; let’s have a look at what such a method might look like:

        private async Task ScanProjectItems(ProjectItems projectItems, ProjectData projectData)
        {
            await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

            foreach (EnvDTE.ProjectItem pi in projectItems)
            {
                if (pi.IsKind(ProjectItemTypes.SOLUTION_FOLDER, 
                              ProjectItemTypes.PROJECT_FOLDER,
                              ProjectItemTypes.SOLUTION_ITEM)
                    && pi.ProjectItems != null)
                {                    
                    await ScanProjectItems(pi.ProjectItems, projectData);
                    return;
                }

                string text = await GetDocumentText(pi);
                if (string.IsNullOrWhiteSpace(text)) continue;

                projectData.Items.Add(new Models.ProjectItem()
                {
                    Name = pi.Name,
                    Data = text
                });
            }
        }

I wanted to look specifically into two aspects of this method: IsKind() and GetDocumentText(). None of the rest of this is particularly exciting.

Kind of File

In a VS Extension, you can read ProjectItems – they represent pretty much anything in the solution, and so it’s necessary to be able to find out exactly what the type is. As you can see above, I have an extension method, which was taken from here. Let’s have a quick look at the file that defines the ProjectItemTypes:

    public static class ProjectItemTypes
    {
        public const string MISC = "{66A2671D-8FB5-11D2-AA7E-00C04F688DDE}";
        public const string SOLUTION_FOLDER = "{66A26720-8FB5-11D2-AA7E-00C04F688DDE}";
        public const string SOLUTION_ITEM = "{66A26722-8FB5-11D2-AA7E-00C04F688DDE}";                                            
        public const string PROJECT_FOLDER = "{6BB5F8EF-4483-11D3-8BCF-00C04F8EC28C}";        
    }

I’m sure there’s a better way, but after I realised what Mads was doing in the above linked project, I just stuck a breakpoint in the code, and copied the “Kind” guid from there! The IsKind method is taken from the same codebase:

        public static bool IsKind(this ProjectItem projectItem, params string[] kindGuids)
        {
            Microsoft.VisualStudio.Shell.ThreadHelper.ThrowIfNotOnUIThread();

            foreach (var guid in kindGuids)
            {
                if (projectItem.Kind.Equals(guid, StringComparison.OrdinalIgnoreCase))
                    return true;
            }

            return false;
        }

As you can see, it’s almost not worth mentioning – except that the extensions are very particular about running in the UI thread, so you’ll find ThrowIfNotOnUIThread scattered around your code like confetti!

Reading File Contents

If you need to access the file contents in an extension, one way is to convert the project item document to a TextDocument, and then use Edit Points:

        public static async Task<string> GetDocumentText(this ProjectItem projectItem)
        {
            if (projectItem == null) return string.Empty;
            await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

            try
            {
                TextDocument textDocument;
                if (!projectItem.IsOpen)
                {
                    var doc = projectItem.Open(EnvDTE.Constants.vsViewKindCode);
                    textDocument = (TextDocument)doc.Document.Object("TextDocument");
                }
                else
                {
                    textDocument = (TextDocument)projectItem.Document.Object("TextDocument");
                }
                EditPoint editPoint = textDocument.StartPoint.CreateEditPoint();
                return editPoint.GetText(textDocument.EndPoint);
            }
            catch (Exception)
            {
                return string.Empty;
            }
        }

Edit points are much more powerful than this: they allow you to change the text in a document. For example, imagine your extension needed to change every local camel-cased variable into one with an underscore prefix (myVariable to _myVariable); you might choose to use edit points there.
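Here’s a rough sketch of what that might look like – this isn’t from the original extension, and it assumes the document is already open (the transformation itself is passed in as a delegate):

        public static async Task ReplaceDocumentText(
            this ProjectItem projectItem, Func<string, string> transform)
        {
            await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

            // Assumes the document is already open; see GetDocumentText above
            // for how to open it if it isn't
            var textDocument = (TextDocument)projectItem.Document.Object("TextDocument");

            EditPoint editPoint = textDocument.StartPoint.CreateEditPoint();
            string originalText = editPoint.GetText(textDocument.EndPoint);

            // Replace everything between the start and end points with the
            // transformed text
            editPoint.ReplaceText(
                textDocument.EndPoint,
                transform(originalText),
                (int)vsEPReplaceTextOptions.vsEPReplaceTextKeepMarkers);
        }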

References

https://www.csharpcodi.com/csharp-examples/EnvDTE.Document.Object(string)/

https://github.com/madskristensen/MarkdownEditor/

An ADR Visual Studio Tool – Part 2 – Refactoring

A short while ago, I wrote an article about how to create a new extension for Visual Studio. The end target of this is to have a tool that will allow easy management of ADR Records.

In this post, I’m going to clean up some of the code in that initial sample. There’s nothing new here, just some basic WPF good practices. If you’re interested in downloading, or just seeing the code, it’s here.

What’s wrong with what was there?

The extension worked (at least as far as it went), but it used code behind to execute the functionality. This means that the logic and the UI are tightly coupled. My guess is that soon (maybe as part of .Net 5) the extensions will move over to another front end tech (i.e. not WPF), which means that people that have written extensions may need to re-write them. This is a guess – I don’t know any more than you do.

Onto the refactoring… Starting with MVVM basics

Let’s start with a simple View Model; previously, the code was in the code behind, so we’ll move that all over to a view model:

    public class AdrControlViewModel : INotifyPropertyChanged
    {
        public AdrControlViewModel()
        {
            Scan = new RelayCommandAsync<object>(ScanCommand);
        }

        private string _summary;
        public string Summary 
        { 
            get => _summary; 
            set => UpdateField(ref _summary, value); 
        }

        public RelayCommandAsync<object> Scan { get; set; }

        private async Task ScanCommand(object arg)
        {
            var solutionAnalyser = new SolutionAnalyser();
            Summary = await solutionAnalyser.ScanSolution();            
        }
    }
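Scan is a RelayCommandAsync<T>; the implementation I actually used is pilfered from elsewhere (as mentioned below), but a minimal sketch of its shape looks something like this (treat it as illustrative only):

    // A minimal sketch only - the real implementation also guards against
    // re-entrancy while the command is running
    public class RelayCommandAsync<T> : ICommand
    {
        private readonly Func<T, Task> _execute;
        private readonly Predicate<T> _canExecute;

        public RelayCommandAsync(Func<T, Task> execute, Predicate<T> canExecute = null)
        {
            _execute = execute ?? throw new ArgumentNullException(nameof(execute));
            _canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged
        {
            add => CommandManager.RequerySuggested += value;
            remove => CommandManager.RequerySuggested -= value;
        }

        public bool CanExecute(object parameter) =>
            _canExecute == null || _canExecute((T)parameter);

        public async void Execute(object parameter) =>
            await _execute((T)parameter);
    }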

You’ll also need the following INotifyPropertyChanged boilerplate code:

        #region INotifyPropertyChanged
        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged([CallerMemberName]string fieldName = null) =>        
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(fieldName));

        private void UpdateField<T>(ref T field, T value, [CallerMemberName]string fieldName = null)
        {
            field = value;
            OnPropertyChanged(fieldName);
        }
        #endregion

One day, this can go into a base class, if we ever create a second View Model. We’ll come back to SolutionAnalyser in a while. I shamelessly pilfered the RelayCommand code from here, and did a bit of shuffling around of the files.

Finally, the code behind needs to be changed as follows:

    public partial class AdrControl : UserControl
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="AdrControl"/> class.
        /// </summary>
        public AdrControl()
        {
            this.InitializeComponent();
            DataContext = new AdrControlViewModel();
        }   
    }

SolutionAnalyser

This is, essentially, the only real code that actually does anything. It’s likely to be severely refactored in a later incarnation, but for now, it’s just in its own class:

    public class SolutionAnalyser
    {
        internal async Task<string> ScanSolution()
        {
            try
            {
                await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();
                var dte = (DTE)Package.GetGlobalService(typeof(DTE));

                var sln = Microsoft.Build.Construction.SolutionFile.Parse(dte.Solution.FullName);
                string summaryText = $"{sln.ProjectsInOrder.Count.ToString()} projects";

                foreach (Project p in dte.Solution.Projects)
                {
                    summaryText += $"{Environment.NewLine} {p.Name} {p.ProjectItems.Count}";
                }
                return summaryText;
            }
            catch
            {
                return "Solution is not ready yet.";
            }            
        }
    }

What’s next?

The next stage is to introduce a search and create facility. I’m going to start creating some issues in the GitHub repo when I get some time.

Fault Resilience in Web Calls Using Polly

Some time ago, I was working on a project using WCF RPC calls. We discovered that of the calls that we made, around 10 – 20% were failing. Once we discovered this, I implemented a retry policy to try and cope with the flaky network. I didn’t realise at the time (or maybe it didn’t exist at the time) that there’s a library out there that does that for you: Polly.

To test this out, let’s create a bog standard API Web Project; from Powershell create a directory; e.g.:

mkdir PollyDemo

Then create a Web API project:

dotnet new webapi -n UnreliableApi

Let’s add a client console app project:
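One way to create it, from the same PowerShell prompt, is something like the following (the project name is just an assumption):

dotnet new console -n PollyClient

Give the new console app the following Main method: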

    static async Task Main(string[] args)
    {

        HttpClient httpClient = new HttpClient()
        {
            BaseAddress = new Uri("https://localhost:44368")
        };

        var result = await httpClient.GetAsync("weatherforecast");
        if (result.IsSuccessStatusCode)
        {
            var output = await result.Content.ReadAsStringAsync();
            Console.WriteLine(output);
        }
    }

This should call the API fine (using the default API project, you get a weather forecast endpoint). Let’s now introduce a level of entropy into the system; in the server, let’s add the following code:

        static Random _rnd = new Random();

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            // 1 in 3 chance of an error
            if (_rnd.Next(3) == 1)
            {
                throw new Exception("Bad stuff happened here");
            }

            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }

Now, if you run the client a few times, you’ll see that it’s occasionally failing (actually, from this screenshot, all you can really see is that it didn’t succeed):

We know that this error is transient – just needs an F5, so we can put some code into the client to handle this:

for (int i = 1; i <= 3; i++)
{
    var result = await httpClient.GetAsync("weatherforecast");
    if (result.IsSuccessStatusCode)
    {
        var output = await result.Content.ReadAsStringAsync();
        Console.WriteLine(output);
        break;
    }
    else
    {
        Console.WriteLine("Bad things happened... ");
    }
}

Console.WriteLine("If we get here, we either succeeded or gave up ");

And we’re back in business:

The only problem here is that our code is suddenly a little cumbersome; we have a retry loop around the call, and we’re relying on each person who implements this not to introduce a bug – for example, by forgetting to break out of the loop when the call succeeds.

Before we introduce Polly as the solution, it’s probably worth mentioning that, whilst some errors can legitimately be fixed by simply trying again, in other cases this can be incredibly harmful. You should ensure that your code is idempotent. Imagine you’re creating a sales order on this call, and it fails after writing something to the database – you risk creating multiple sales orders!

Polly

Polly is basically a library built around this concept. It’s a NuGet package, so import it from here into your client. Let’s start by refactoring our code to call the service:

        private static async Task<bool> GetTheWeather(HttpClient httpClient)
        {
            var result = await httpClient.GetAsync("weatherforecast");
            if (result.IsSuccessStatusCode)
            {
                var output = await result.Content.ReadAsStringAsync();
                Console.WriteLine(output);
                return true;
            }
            else
            {
                Console.WriteLine("Bad things happened... ");
                return false;
            }
        }

Now let’s call that using Polly:

        static async Task Main(string[] args)
        {

            //https://localhost:44368/weatherforecast

            HttpClient httpClient = new HttpClient()
            {
                BaseAddress = new Uri("https://localhost:44368")
            };

            var retryPolicy = Policy
                .HandleResult<bool>(false)               
                .RetryAsync(3, (response, retryCount) =>
                  {
                      Console.WriteLine($"Received a response of {response.Result}, retrying {retryCount}.");
                  });

            var result = await retryPolicy.ExecuteAsync(() => GetTheWeather(httpClient));            

            Console.WriteLine("If we get here, we either succeeded or gave up ");
        }

All we’re doing here is telling Polly that we want a policy that acts on a return value of False (in theory, I imagine you could set this to something like .HandleResult(“Aardvark”) and have it retry while the method returned a value of Aardvark). RetryAsync sounds pretty obvious, but the Async part is very important, otherwise, you won’t be able to use ExecuteAsync… and you’ll spend a while wondering why (so I hear)!

ExecuteAsync is awaitable, so it will wrap the retry logic in this single line.

The advantage here is that you can define a retry policy for your application, or several retry policies for your application.
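Polly has a number of other policies beyond a straight retry; for example, here’s a sketch (not from the original post) of a policy that retries failed HTTP calls with an exponential back-off:

            // A sketch: retry up to three times, waiting 2, 4, then 8 seconds,
            // on either an exception or a non-success status code
            var backOffPolicy = Policy
                .Handle<HttpRequestException>()
                .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
                .WaitAndRetryAsync(
                    3,
                    retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));

            var response = await backOffPolicy.ExecuteAsync(
                () => httpClient.GetAsync("weatherforecast"));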

References

https://github.com/App-vNext/Polly

https://github.com/bryanjhogan/trydotnet-polly

Manually Adding DbContext for an Integration Test

In EF Core, there is an extension method that allows you to register a DbContext, called AddDbContext. This is a really useful method; however, in some cases, you may find that it doesn’t work for you. Specifically, if you’re trying to inject a DbContext to use for testing, it doesn’t allow you to access the DbContext instance that you register.

Take the following code:

services.AddDbContext<MyDbContext>(options =>
                options.UseSqlServer());
         

I’ve previously written about using UseInMemoryDatabase. However, that article covered unit tests only – that is, you are able to instantiate a version of the DbContext in the unit test, and use that.

As a reminder of the linked article: if you were to try to write a test that included that DbContext, you might want to use an in-memory database; you might, therefore, build up a DbContextOptions like this:

var options = new DbContextOptionsBuilder<MyDbContext>()
                .UseInMemoryDatabase(Guid.NewGuid().ToString())
                .EnableSensitiveDataLogging()
                .Options;
var context = new MyDbContext(options);

But in a scenario where you’re writing an integration test, you may need to register this with the IoC. Unfortunately, in this case, AddDbContext can stand in your way. The alternative is that you can simply register the DbContext yourself:

var options = new DbContextOptionsBuilder<MyDbContext>()
                .UseInMemoryDatabase(Guid.NewGuid().ToString())
                .EnableSensitiveDataLogging()
                .Options;
var context = new MyDbContext(options);
AddMyData(context);
services.AddScoped<MyDbContext>(_ => context);

AddMyData just adds some data into your database; for example:

private void AddMyData(MyDbContext context)
{
    MyData data = new MyData()
    {
        value1 = "test",
        value2 = "1"
    };
    context.MyData.Add(data);
    context.SaveChanges();
}

This allows you to register your own, in memory, DbContext in your IoC.
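To put that in context, here’s a sketch of where the registration might sit in an integration test – assuming WebApplicationFactory from Microsoft.AspNetCore.Mvc.Testing, and a Startup class (both of which are assumptions, not part of the article above):

// A sketch only - assumes the Microsoft.AspNetCore.Mvc.Testing and
// Microsoft.AspNetCore.TestHost packages are referenced by the test project
var factory = new WebApplicationFactory<Startup>()
    .WithWebHostBuilder(builder =>
    {
        builder.ConfigureTestServices(services =>
        {
            var options = new DbContextOptionsBuilder<MyDbContext>()
                .UseInMemoryDatabase(Guid.NewGuid().ToString())
                .EnableSensitiveDataLogging()
                .Options;

            var context = new MyDbContext(options);
            AddMyData(context);

            // Replace the application's DbContext with the in-memory version
            services.AddScoped<MyDbContext>(_ => context);
        });
    });

var client = factory.CreateClient();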

Building a list with Asp.Net Core

I’ve recently been working with Asp.Net Core to build some functionality that involves building a list of values. Typically, with Asp.Net Core using Razor, you have a form that may look something like this:

@using (Html.BeginForm("MyAction", "ControllerName", FormMethod.Post))
{
    @Html.AntiForgeryToken()
    <div class="form-group">
        @Html.LabelFor(model => model.MyValue)
        @Html.TextBoxFor(model => model.MyValue)
    </div>

    <div class="form-group">
        <button type="submit">Submit</button>
    </div>
}

This works really well in 90% of cases, where you want the user to enter a value and submit. This is your average CRUD application; however, what happens if, for some reason, you need to manipulate one of these values? Let’s say, for example, that you want to submit a list of values.

For the sake of simplicity, we’ll say that the controller accepts a csv, but we want to build this up before submission. You can’t simply call a controller method for two reasons: the first is that the controller will reload the page; and the second is that you don’t have anywhere to put the value on the server. If this was, say, a method to create an entry in the DB, the DB entry, by definition, couldn’t exist until after the submission.

This all means that you would need to build this list on the client.

A solution

Let’s start with a very simple little feature of Html Helpers – the hidden field:

@Html.HiddenFor(model => model.MyList)

This means that we can store the value being submitted to the user, without showing it to the user.

We’ll now need to display the data being added. An easy way to do this is a very simple table (you can load existing values into the table for edit scenarios):

    <div>
        <table id="listTable">
            <tbody>
                @if (Model?.ValueList != null)
                {
                    @foreach (var v in Model.ValueList)
                    {
                        <tr>
                            <td>@v</td>
                        </tr>
                    }
                }
            </tbody>
        </table>
    </div>    

Pay particular attention to the Table Id and the fact that the conditional check is inside the tbody tag. Now let’s allow the user to add a new piece of data:

    <div class="form-group">
        @Html.LabelFor(model => model.NewValue)
        @Html.TextBoxFor(model => model.NewValue)
    </div>
    <div>
        <button type="button" id="add-value">Add Value</button>
    </div>

Okay, so now we have a button and a field to add the value; we also have a method of displaying those values. We’ll need a little bit of Javascript (JQuery in this case) to append to our list:

@section Scripts {
    <script>
        $('#add-value').click(() => {

            const hiddenList = $('#MyList');
            const newValue = $('#NewValue');

            if (!hiddenList.val()) {
                hiddenList.val(newValue.val());
            } else {
                hiddenList.val(hiddenList.val() + ',' + newValue.val());
            }

            $('#listTable > tbody:last-child').append('<tr><td>' + newValue.val() + '</td></tr>');
        });
    </script>
}

On the button click, we get the hidden list and the new value, we then simply add the new value to the list. Finally, we manipulate the table in order to display the new value. If you F12 the page, you’ll notice that the Razor engine replaces the Html Helpers with controls that have Ids the same as the fields that they are displaying (note that if the field name contains a “.”, for example: MyClass.MyField, the Id would be MyClass_MyField).

When you now submit this, you’ll see that the hidden field contains the correct list of values.
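On the server side, the action receiving the form can then simply split the hidden field. Here’s a sketch – the action and field names follow those used above, but the view model type and the redirect are assumptions:

[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult MyAction(MyViewModel model)
{
    // The hidden field arrives as a single csv string
    var values = (model.MyList ?? string.Empty)
        .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

    // ... do something with the values ...

    return RedirectToAction("Index");
}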

References

https://stackoverflow.com/questions/16174465/how-do-i-update-a-model-value-in-javascript-in-a-razor-view/16174926

https://stackoverflow.com/questions/171027/add-table-row-in-jquery

https://stackoverflow.com/questions/36317362/how-to-add-an-item-to-a-list-in-a-viewmodel-using-razor-and-net-core