Download file from Azure Storage using JavaScript

.Net is an excellent framework – if you want proof of that, try to do even very simple things in JavaScript. It feels a bit like getting out of a Tesla and travelling back in time to drive a Reliant Robin (I’ve never actually driven either of these cars, so I don’t really know whether it feels like that or not!)

If, for example, you want to download a file from a Blob Storage container, in .Net you’re looking at about four lines of strongly typed code. There’s basically nothing to do, and it consistently works. If you want to do the same in JavaScript, there’s a Microsoft JavaScript library.

In said library, there is a function that should get a download URL for you; it’s named getUrl:

const downloadLink = blobService.getUrl(containerName, fileId, sasKey);            

When I used this (at least, at the time), it gave me the following error:

Signature did not match

To get around this, you can build the download link manually like this:

const downloadLink = blobUri + '/' + containerName + '/' + fileId + sasKey;

Comparing the two, the former appears to escape the question mark in the SAS.
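As a rough sketch, the manual approach just concatenates the parts, on the assumption that the SAS token already starts with a question mark. The helper function and the values below are mine, for illustration only; they aren't part of the Microsoft library:

```javascript
// Hypothetical helper: build a blob download URL by hand.
// Assumes sasKey already includes its leading '?'.
function buildDownloadLink(blobUri, containerName, fileId, sasKey) {
    return blobUri + '/' + containerName + '/' + fileId + sasKey;
}

// Illustrative values (not a real account or signature):
const link = buildDownloadLink(
    'https://myaccount.blob.core.windows.net',
    'mycontainer',
    'report.pdf',
    '?sv=2019-02-02&sig=abc123'
);
// The '?' passes through verbatim, rather than being percent-encoded
```

Because the token is appended as-is, the signature in the SAS reaches the server unmodified, which is what the storage service validates against.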

To actually download the file, you can use this:

        // https://stackoverflow.com/questions/3749231/download-file-using-javascript-jquery
        function downloadURI(uri, name) 
        {
            var link = document.createElement("a");
            link.download = name;
            link.href = uri;
            link.click();
        }

And the final download function looks like this:

        function downloadFile(sas, storageUri,
            containerName, fileId, destinationFileName) {

            // Build the link manually; blobService.getUrl escaped the
            // question mark in the SAS and invalidated the signature.
            const downloadLink = storageUri + '/' + containerName + '/' + fileId + sas;

            downloadURI(downloadLink, destinationFileName);
        }

An ADR Visual Studio Tool – Part 4 – Dependency Injection

Continuing with my little series on creating a Visual Studio extension, in this post I’ll talk about how to add dependency injection to your project.

If you’d like to see the whole solution for this, it can be found here.

Unity

In this post on Azure Functions, I talked about using Unity as an IoC container in a place where an IoC container might not necessarily fit; whilst this is no longer true for Azure Functions, it does appear to be for extensions – I presume because they don’t expect you to have one big enough to warrant IoC. Also, even with DI, testing is very difficult, because most of what you’re doing, you’re doing to Visual Studio.

Let’s start by installing Unity in our project:

Install-Package Unity

Rules Analyser

In our case, we were analysing the project and extracting files; however, we were extracting all files; as a result, a check needed to be made to extract only markdown files. Consequently, I created a RulesAnalyser class:

    public class RulesAnalyser : IRulesAnalyser
    {
        // Match on the full ".md" extension; a bare "md" would also
        // match names such as "build.cmd".
        public bool IsProjectItemNameValid(string projectItemName) =>
            projectItemName.EndsWith(".md");
    }

We could (and I did initially) instantiate that directly in the ViewModel, but that feels quite dirty.

AdrPackage

The *Package file for the extension seems to be the entry point, so we can add the Unity container here:

    public sealed class AdrPackage : AsyncPackage
    {        
        public static Lazy<IUnityContainer> UnityContainer =
         new Lazy<IUnityContainer>(() =>
         {
             IUnityContainer container = InitialiseUnityContainer();
             return container;
         });

        private static IUnityContainer InitialiseUnityContainer()
        {
            UnityContainer container = new UnityContainer();
            container.RegisterType<IRulesAnalyser, RulesAnalyser>();
            container.RegisterType<ISolutionAnalyser, SolutionAnalyser>();
            return container;
        }

        . . .

View Model

The next thing we need to do is to inject our dependencies.

        public AdrControlViewModel() 
            : this(AdrPackage.UnityContainer.Value.Resolve<IRulesAnalyser>(),
                  AdrPackage.UnityContainer.Value.Resolve<ISolutionAnalyser>())
        {}

        public AdrControlViewModel(IRulesAnalyser rulesAnalyser, ISolutionAnalyser solutionAnalyser)
        {            
            _rulesAnalyser = rulesAnalyser;
            _solutionAnalyser = solutionAnalyser;

            Scan = new RelayCommandAsync<object>(ScanCommand);
        }

And that’s it, we now have a working DI model in our project.

References

https://stackoverflow.com/questions/2875429/iunitycontainer-resolvet-throws-error-claiming-it-cannot-be-used-with-type-par

https://www.pmichaels.net/2018/02/04/using-unity-azure-functions/

I wrote a book – here’s what I learned

I’ve recently had a book published; if you’re interested in seeing what it is, you can find it here.

This post is simply a summary of things that I wished I’d known before I started writing. As a quick disclaimer, please don’t treat this article as legal or financial advice. It’s simply a list of things that I, personally, have encountered; if you’re in a similar situation, I would encourage you to seek advice in the same way that I have, but hopefully this article will give you a starting point.

I’ll also mention that I’m in the UK, so some of the points here may be specific to location (certainly the figures that I’m giving are in sterling, and rough approximations).

Choosing a Topic

Did I mention this was all subjective?

In my case, the topic was chosen for me; however, I would suggest that, where you have a choice, you pick something that isn’t time sensitive.

Let me try to illustrate what I mean: when I started writing the above book, .Net Core 3 was available only in preview, so I’d write a chapter, test the software, and then Microsoft would release a breaking change! I can’t describe in words how frustrating that is – you spend, maybe two or three weeks preparing and writing a chapter, and then you have to re-write it, almost from scratch!

Further, .Net Core 3 had a release date: obviously, once it’s released, there’s a sweet spot, where people want to read about the new tech – I can’t help but think that the new release cadence by Microsoft, whilst helpful for those using the tech, is just too fast for anyone to write any kind of lasting documentation.

Finally, think of the books that you remember in the industry: Domain-Driven Design, Design Patterns: Elements of Reusable Object-Oriented Software, Test-Driven Development: By Example; these are all books that are not technology specific – the GoF book was released in 1994, but it’s still relevant today!

Legal

Most of this section will discuss contracts.

The Contract Itself

You may want to consult a solicitor to get professional advice on this. When I asked around, I was quoted around £200 – £300 for a solicitor to review the contract and offer me advice. However, there is another option: The Society of Authors offers contract reviews as part of the membership fee (which was around £100 at the time of writing).

Insurance

If you get a book contract, the contract itself will say that you, personally, are liable: if the publisher gets sued, you’re liable. There are essentially three approaches to address this:

1. Professional Indemnity Insurance

This is a private insurance policy that you take out against being sued. It is not cheap, and you should account for this expense when you decide to embark on writing a book. My impression is that you are very unlikely to actually be sued: the book itself will have a disclaimer against any accidental damage; plus, providing you don’t simply copy and paste blog posts into your book without attribution (that is, your work must be original, or you must have the author’s permission to replicate it), you’re unlikely to fall foul of any copyright issues.

2. A Limited Company

I’ve done a lot of research into this and, to be honest, I’m still not completely sure about it. A limited company has limited liability, so if you were to be sued, providing that you signed the contract on behalf of the company, you, personally, should be safe. I have, however, seen (and received) advice saying that the director of the limited company may bear some personal liability for any losses to the company.

Additionally, setting up a limited company is not a cheap option: although setting up the company itself only costs ~£10 (in the UK), you must submit company accounts – you can actually go to prison if you get that wrong – so you may end up forking out for an accountant (budget between £500 – £1000 / year for that!)

3. Do Nothing

I strongly suspect this is what most people do. Technically it does leave you personally liable.

Financial

Once you get offered a book contract, you will be given an advance. In addition, you will be entitled to royalties on any book sales. Here’s how that works:

Imagine your advance is £1000, and you get 10% from book sales. Let’s say the book is consistently £50 retail.

After the first 200 books have sold, your royalties will reach your advance – meaning that you will start to receive some money per sale.

If the book sells 150 copies, then you still receive the advance, but no further money.
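The arithmetic above is simple enough to sketch as a one-liner (using the illustrative figures from the text – this is not financial advice, and the function name is mine):

```javascript
// Number of copies that must sell before the advance is earned out:
// each sale earns royaltyRate * retailPrice, so divide the advance by that.
function copiesToEarnOut(advance, royaltyRate, retailPrice) {
    return advance / (royaltyRate * retailPrice);
}

copiesToEarnOut(1000, 0.10, 50); // 200 copies, at £5 royalty per copy
```

So with a £1000 advance, sales below 200 copies earn no further money, and every copy beyond that pays £5.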

Remember as well that any money you earn needs to be declared to HMRC – so you’ll need to register for self-assessment.

Employment

If you have a full-time job, it’s worth bearing in mind that writing and publishing a book may well be in breach of your employment contract; consequently, you should speak to your employer before you sign anything.

Editing

My process for editing my blog posts (and any fiction that I write – some of which can be seen here) is to write them in OneNote or Word, transfer them to the WordPress site, come back in about an hour and read the preview – if it looks good to me, then it goes out.

The editing process I encountered with a publisher was, obviously, different. There seems to be a formula to the layout that they require; for example, if you have a series of actions, they like them to be in numbered steps. I assume that, to a greater or lesser extent, every publisher does essentially the same thing.

It’s also worth bearing in mind that the editing process might change some of your text without your consent – you need to be aware that this is a possibility. It’s also quite likely that, some months after you’ve finished a chapter, you’ll be asked to revisit it. On occasion, I found myself following my own chapter to try and remember some of the material – which I suppose is a good thing: like when you go searching for something on the internet, and you come across your own post!

Promotion

One way or another, you’re going to have to take an interest in promoting your work. If you’re a fiction writer, that means book signings, etc.; however, if you publish a tech book, it means talks, podcasts, and blog posts (such as this)! While I have done a flash talk on (essentially) this blog post, I would probably advise against giving a talk on “My Book”; rather, pick a subject, and simply mention that you have also written a book (if the subject you choose is in the book then that’s probably better, but not essential!)

Summary

If you do decide to embark on writing a book, it is hugely rewarding, you learn a lot of things, and make new contacts. However, be prepared for some late nights and lost weekends. Also, don’t be under the impression that you can make any money out of this: you’re very likely to be out of pocket by the time you finish.

An ADR Visual Studio Tool – Part 3 – Listing and Reading the files

In this post, I’ll continue with the VS extension plug-in that I originally started here.

We’ll get the plug-in to list the items found in the projects, and read the contents of the files. The source code for this can be found here. I won’t be listing all the source code in this article (most of it is just simple WPF and View Model binding).

To look through all the projects and folders in the solution, we’ll need to recursively scan all the files, and then read them; let’s have a look at what such a method might look like:

        private async Task ScanProjectItems(ProjectItems projectItems, ProjectData projectData)
        {
            await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

            foreach (EnvDTE.ProjectItem pi in projectItems)
            {
                if (pi.IsKind(ProjectItemTypes.SOLUTION_FOLDER, 
                              ProjectItemTypes.PROJECT_FOLDER,
                              ProjectItemTypes.SOLUTION_ITEM)
                    && pi.ProjectItems != null)
                {                    
                    await ScanProjectItems(pi.ProjectItems, projectData);
                    continue; // carry on scanning the remaining items
                }

                string text = await GetDocumentText(pi);
                if (string.IsNullOrWhiteSpace(text)) continue;

                projectData.Items.Add(new Models.ProjectItem()
                {
                    Name = pi.Name,
                    Data = text
                });
            }
        }

I wanted to look specifically into two aspects of this method: IsKind() and GetDocumentText(). None of the rest of this is particularly exciting.

Kind of File

In a VS Extension, you can read ProjectItems – they represent pretty much anything in the solution, and so it’s necessary to be able to find out exactly what the type is. As you can see above, I have an extension method, which was taken from here. Let’s have a quick look at the file that defines the ProjectItemTypes:

    public static class ProjectItemTypes
    {
        public const string MISC = "{66A2671D-8FB5-11D2-AA7E-00C04F688DDE}";
        public const string SOLUTION_FOLDER = "{66A26720-8FB5-11D2-AA7E-00C04F688DDE}";
        public const string SOLUTION_ITEM = "{66A26722-8FB5-11D2-AA7E-00C04F688DDE}";                                            
        public const string PROJECT_FOLDER = "{6BB5F8EF-4483-11D3-8BCF-00C04F8EC28C}";        
    }

I’m sure there’s a better way, but after I realised what Mads was doing in the above linked project, I just stuck a breakpoint in the code, and copied the “Kind” guid from there! The IsKind method is taken from the same codebase:

        public static bool IsKind(this ProjectItem projectItem, params string[] kindGuids)
        {
            Microsoft.VisualStudio.Shell.ThreadHelper.ThrowIfNotOnUIThread();

            foreach (var guid in kindGuids)
            {
                if (projectItem.Kind.Equals(guid, StringComparison.OrdinalIgnoreCase))
                    return true;
            }

            return false;
        }

As you can see, it’s almost not worth mentioning – except that the extensions are very particular about running in the UI thread, so you’ll find ThrowIfNotOnUIThread scattered around your code like confetti!

Reading File Contents

If you need to access the file contents in an extension, one way is to convert the project item document to a TextDocument, and then use Edit Points:

        public static async Task<string> GetDocumentText(this ProjectItem projectItem)
        {
            if (projectItem == null) return string.Empty;
            await Microsoft.VisualStudio.Shell.ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

            try
            {
                TextDocument textDocument;
                if (!projectItem.IsOpen)
                {
                    var doc = projectItem.Open(EnvDTE.Constants.vsViewKindCode);
                    textDocument = (TextDocument)doc.Document.Object("TextDocument");
                }
                else
                {
                    textDocument = (TextDocument)projectItem.Document.Object("TextDocument");
                }
                EditPoint editPoint = textDocument.StartPoint.CreateEditPoint();
                return editPoint.GetText(textDocument.EndPoint);
            }
            catch (Exception)
            {
                return string.Empty;
            }
        }

Edit Points are much more powerful than this: they allow you to change the text in a document. For example, imagine your extension needed to change every local camel-cased variable into one with an underscore (myVariable to _myVariable); you might choose to use edit points there.

References

https://www.csharpcodi.com/csharp-examples/EnvDTE.Document.Object(string)/

https://github.com/madskristensen/MarkdownEditor/

An ADR Visual Studio Tool – Part 2 – Refactoring

A short while ago, I wrote an article about how to create a new extension for Visual Studio. The end target of this is to have a tool that will allow easy management of ADR Records.

In this post, I’m going to clean up some of the code in that initial sample. There’s nothing new here, just some basic WPF good practices. If you’re interested in downloading, or just seeing the code, it’s here.

What’s wrong with what was there?

The extension worked (at least as far as it went), but it used code behind to execute the functionality. This means that the logic and the UI are tightly coupled. My guess is that soon (maybe as part of .Net 5) the extensions will move over to another front end tech (i.e. not WPF), which means that people that have written extensions may need to re-write them. This is a guess – I don’t know any more than you do.

Onto the refactoring… Starting with MVVM basics

Let’s start with a simple View Model; previously, the code was in the code behind, so we’ll move that all over to a view model:

    public class AdrControlViewModel : INotifyPropertyChanged
    {
        public AdrControlViewModel()
        {
            Scan = new RelayCommandAsync<object>(ScanCommand);
        }

        private string _summary;
        public string Summary 
        { 
            get => _summary; 
            set => UpdateField(ref _summary, value); 
        }

        public RelayCommandAsync<object> Scan { get; set; }

        private async Task ScanCommand(object arg)
        {
            var solutionAnalyser = new SolutionAnalyser();
            Summary = await solutionAnalyser.ScanSolution();            
        }
    }

You’ll also need the following INotifyPropertyChanged boilerplate code:

        #region INotifyPropertyChanged
        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged([CallerMemberName]string fieldName = null) =>        
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(fieldName));

        private void UpdateField<T>(ref T field, T value, [CallerMemberName]string fieldName = null)
        {
            field = value;
            OnPropertyChanged(fieldName);
        }
        #endregion

One day, this can go into a base class, if we ever create a second View Model. We’ll come back to SolutionAnalyser in a while. I shamelessly pilfered the RelayCommand code from here.

Finally, the code behind needs to be changed as follows:

    public partial class AdrControl : UserControl
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="AdrControl"/> class.
        /// </summary>
        public AdrControl()
        {
            this.InitializeComponent();
            DataContext = new AdrControlViewModel();
        }   
    }

SolutionAnalyser

This is, essentially, the only real code that actually does anything. It’s likely to be severely refactored in a later incarnation, but for now, it’s just in its own class:

    public class SolutionAnalyser
    {
        internal async Task<string> ScanSolution()
        {
            try
            {
                await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();
                var dte = (DTE)Package.GetGlobalService(typeof(DTE));

                var sln = Microsoft.Build.Construction.SolutionFile.Parse(dte.Solution.FullName);
                string summaryText = $"{sln.ProjectsInOrder.Count.ToString()} projects";

                foreach (Project p in dte.Solution.Projects)
                {
                    summaryText += $"{Environment.NewLine} {p.Name} {p.ProjectItems.Count}";
                }
                return summaryText;
            }
            catch
            {
                return "Solution is not ready yet.";
            }            
        }
    }

What’s next?

The next stage is to introduce a search and create facility. I’m going to start creating some issues in the GitHub repo when I get some time.

Fault Resilience in Web Calls Using Polly

Some time ago, I was working on a project using WCF RPC calls. We discovered that of the calls that we made, around 10 – 20% were failing. Once we discovered this, I implemented a retry policy to try and cope with the flaky network. I didn’t realise at the time (or maybe it didn’t exist at the time) that there’s a library out there that does that for you: Polly.

To test this out, let’s create a bog standard API Web Project; from Powershell create a directory; e.g.:

mkdir PollyDemo

Then create a Web API project:

dotnet new webapi -n UnreliableApi

Let’s add a client project console app:

    static async Task Main(string[] args)
    {

        HttpClient httpClient = new HttpClient()
        {
            BaseAddress = new Uri("https://localhost:44368")
        };

        var result = await httpClient.GetAsync("weatherforecast");
        if (result.IsSuccessStatusCode)
        {
            var output = await result.Content.ReadAsStringAsync();
            Console.WriteLine(output);
        }
    }

This should call the API fine (using the default API project you get a weather forecaster). Let’s now introduce a level of entropy into the system. In the server, let’s add the following code:

        static Random _rnd = new Random();

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            // 1 in 3 chance of an error
            if (_rnd.Next(3) == 1)
            {
                throw new Exception("Bad stuff happened here");
            }

            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }

Now, if you run the client a few times, you’ll see that it’s occasionally failing (actually, from this screenshot, all you can really see is that it didn’t succeed):

We know that this error is transient – just needs an F5, so we can put some code into the client to handle this:

for (int i = 1; i <= 3; i++)
{
    var result = await httpClient.GetAsync("weatherforecast");
    if (result.IsSuccessStatusCode)
    {
        var output = await result.Content.ReadAsStringAsync();
        Console.WriteLine(output);
        break;
    }
    else
    {
        Console.WriteLine("Bad things happened... ");
    }
}

Console.WriteLine("If we get here, we either succeeded or gave up ");

And we’re back in business:

The only problem here is that our code is suddenly a little cumbersome; we have a retry loop around the call, and we risk each person implementing this introducing a bug by not breaking when successful, or similar.

Before we introduce Polly as the solution, it’s probably worth mentioning that, whilst some errors can legitimately be fixed by simply trying again, in other cases this can be incredibly harmful. You should ensure that your code is idempotent. Imagine you’re creating a sales order on this call, and it fails after writing something to the database – you risk creating multiple sales orders!

Polly

Polly is basically a library built around this concept. It’s a NuGet package, so import it from here into your client. Let’s start by refactoring our code to call the service:

        private static async Task<bool> GetTheWeather(HttpClient httpClient)
        {
            var result = await httpClient.GetAsync("weatherforecast");
            if (result.IsSuccessStatusCode)
            {
                var output = await result.Content.ReadAsStringAsync();
                Console.WriteLine(output);
                return true;
            }
            else
            {
                Console.WriteLine("Bad things happened... ");
                return false;
            }
        }

Now let’s call that using Polly:

        static async Task Main(string[] args)
        {

            //https://localhost:44368/weatherforecast

            HttpClient httpClient = new HttpClient()
            {
                BaseAddress = new Uri("https://localhost:44368")
            };

            var retryPolicy = Policy
                .HandleResult<bool>(false)               
                .RetryAsync(3, (response, retryCount) =>
                  {
                      Console.WriteLine($"Received a response of {response.Result}, retrying {retryCount}.");
                  });

            var result = await retryPolicy.ExecuteAsync(() => GetTheWeather(httpClient));            

            Console.WriteLine("If we get here, we either succeeded or gave up ");
        }

All we’re doing here is telling Polly that we want a policy that acts on a return value of false (in theory, I imagine you could set this to something like .HandleResult("Aardvark") and have it retry while the method returned a value of Aardvark). RetryAsync sounds pretty obvious, but the Async part is very important; otherwise, you won’t be able to use ExecuteAsync… and you’ll spend a while wondering why (so I hear)!

ExecuteAsync is awaitable, so it will wrap the retry logic in this single line.

The advantage here is that you can define a retry policy for your application, or several retry policies for your application.
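Polly itself is a .Net library, but the retry-policy idea is easy to sketch in any language. Here's a minimal hand-rolled JavaScript equivalent of "retry while the action reports failure" – this is my own illustration of the concept, not Polly's API:

```javascript
// Minimal retry policy: re-invoke an async action while it returns false,
// up to a fixed number of retries, calling a hook before each retry.
async function executeWithRetry(action, retries, onRetry) {
    let result = await action();
    for (let attempt = 1; attempt <= retries && result === false; attempt++) {
        onRetry(result, attempt);
        result = await action();
    }
    return result;
}

// Usage: an action that fails twice, then succeeds.
let calls = 0;
const flaky = async () => ++calls >= 3;

executeWithRetry(flaky, 3, (_, n) => console.log(`retrying ${n}`))
    .then(ok => console.log(ok)); // logs "retrying 1", "retrying 2", then true
```

As with Polly's ExecuteAsync, the caller's code stays a single line; the looping, success check, and give-up logic live in one place instead of being re-implemented (and possibly broken) at every call site.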

References

https://github.com/App-vNext/Polly

https://github.com/bryanjhogan/trydotnet-polly

Manually Adding DbContext for an Integration Test

In EF Core, there is an extension method called AddDbContext that allows you to register a DbContext. This is a really useful method; however, in some cases you may find that it doesn’t work for you. Specifically, if you’re trying to inject a DbContext to use for integration testing, it doesn’t allow you to access the DbContext that you register.

Take the following code:

services.AddDbContext<MyDbContext>(options =>
                options.UseSqlServer());
         

I’ve previously written about using UseInMemoryDatabase. However, that article covered unit tests only – that is, you are able to instantiate a version of the DbContext in the unit test, and use that.

As a reminder of the linked article, if you were to try to write a test that included that DbContext, you might want to use an in-memory database; you might, therefore, build up a DbContextOptions like this:

var options = new DbContextOptionsBuilder<MyDbContext>()
                .UseInMemoryDatabase(Guid.NewGuid().ToString())
                .EnableSensitiveDataLogging()
                .Options;
var context = new MyDbContext(options);

But in a scenario where you’re writing an integration test, you may need to register this with the IoC. Unfortunately, in this case, AddDbContext can stand in your way. The alternative is that you can simply register the DbContext yourself:

var options = new DbContextOptionsBuilder<MyDbContext>()
                .UseInMemoryDatabase(Guid.NewGuid().ToString())
                .EnableSensitiveDataLogging()
                .Options;
var context = new MyDbContext(options);
AddMyData(context);
services.AddScoped<MyDbContext>(_ => context);

AddMyData just adds some data into your database; for example:

private void AddMyData(MyDbContext context)
{            
    MyData data = new MyData()
    {
        value1 = "test",
        value2 = "1"                
    };
    context.MyData.Add(data);
    context.SaveChanges();
}

This allows you to register your own, in memory, DbContext in your IoC.

Building a list with Asp.Net Core

I’ve recently been working with Asp.Net Core to build some functionality, involving building a list of values. Typically, with Asp.Net Core using Razor, you have a form that may look something like this:

@using (Html.BeginForm("MyAction", "ControllerName", FormMethod.Post))
{
    @Html.AntiForgeryToken()
    <div class="form-group">
        @Html.LabelFor(model => model.MyValue)
        @Html.TextBoxFor(model => model.MyValue)
    </div>

    <div class="form-group">
        <button type="submit">Submit</button>
    </div>
}

This works really well in 90% of cases, where you want the user to enter a value and submit. This is your average CRUD application; however, what happens if, for some reason, you need to manipulate one of these values? Let’s say, for example, that you want to submit a list of values.

For the sake of simplicity, we’ll say that the controller accepts a CSV, but we want to build this up before submission. You can’t simply call a controller method, for two reasons: the first is that the controller will reload the page; the second is that you don’t have anywhere to put the value on the server. If this was, say, a method to create an entry in the DB, the DB entry, by definition, couldn’t exist until after the submission.

This all means that you would need to build this list on the client.

A solution

Let’s start with a very simple little feature of Html Helpers – the hidden field:

@Html.HiddenFor(model => model.MyList)

This means that we can store the value being submitted to the user, without showing it to the user.

We’ll now need to display the data being added. An easy way to do this is a very simple table (you can load existing values into the table for edit scenarios):

    <div>
        <table id="listTable">
            <tbody>
                @if (Model?.ValueList != null)
                {
                    @foreach (var v in Model.ValueList)
                    {
                        <tr>
                            <td>@v</td>
                        </tr>
                    }
                }
            </tbody>
        </table>
    </div>    

Pay particular attention to the Table Id and the fact that the conditional check is inside the tbody tag. Now let’s allow the user to add a new piece of data:

    <div class="form-group">
        @Html.LabelFor(model => model.NewValue)
        @Html.TextBoxFor(model => model.NewValue)
    </div>
    <div>
        <button type="button" id="add-value">Add Value</button>
    </div>

Okay, so now we have a button and a field to add the value; we also have a method of displaying those values. We’ll need a little bit of JavaScript (jQuery in this case) to append to our list:

@section Scripts {
    <script>
        $('#add-value').click(() => {

            const hiddenList = $('#MyList');
            const newValue = $('#NewValue');

            if (!hiddenList.val()) {
                hiddenList.val(newValue.val());
            } else {
                hiddenList.val(hiddenList.val() + ',' + newValue.val());
            }

            $('#listTable > tbody:last-child').append('<tr><td>' + newValue.val() + '</td></tr>');
        });
    </script>
}

On the button click, we get the hidden list and the new value, we then simply add the new value to the list. Finally, we manipulate the table in order to display the new value. If you F12 the page, you’ll notice that the Razor engine replaces the Html Helpers with controls that have Ids the same as the fields that they are displaying (note that if the field name contains a “.”, for example: MyClass.MyField, the Id would be MyClass_MyField).

When you now submit this, you’ll see that the hidden field contains the correct list of values.
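The core of the jQuery above is just CSV string manipulation; extracted as a standalone function (the name is mine), the append logic looks like this:

```javascript
// Append a value to a comma-separated list, handling the empty case
// so the result never starts with a stray comma.
function appendToCsv(csv, value) {
    return csv ? csv + ',' + value : value;
}

appendToCsv('', 'red');           // 'red'
appendToCsv('red,green', 'blue'); // 'red,green,blue'
```

Pulling this out of the click handler also makes the behaviour easy to test in isolation, without a DOM.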

References

https://stackoverflow.com/questions/16174465/how-do-i-update-a-model-value-in-javascript-in-a-razor-view/16174926

https://stackoverflow.com/questions/171027/add-table-row-in-jquery

https://stackoverflow.com/questions/36317362/how-to-add-an-item-to-a-list-in-a-viewmodel-using-razor-and-net-core

Create and Test an MSIX Installation

I’ve previously written about the new MSIX packaging project here. One thing that I didn’t cover in that post is that, whilst the process described there will allow you to create an MSIX package, you will not be able to deploy it on your own machine. In fact, you’ll likely get an error such as this if you try:

App installation failed with error message: The current user has already installed an unpackaged version of this app. A packaged version cannot replace this. The conflicting package is 027b8cb5-10c6-42b7-bd06-828fad8e3dfb and it was published by CN=pcmic.

Because the unpackaged version has already run on your machine, there’s a conflict with the installation. Fortunately, removing the installed version is quite easy; first, copy the package name (indicated below):

Launch PowerShell (as admin) and enter the following command:

Get-AppxPackage -name [packagename] -AllUsers

In my case, that would be:

Get-AppxPackage -name 027b8cb5-10c6-42b7-bd06-828fad8e3dfb -AllUsers

You’ll then see something similar to the following (copy the PackageFullName):

Now you can remove the package:

Remove-AppxPackage -package [PackageFullName] -AllUsers

In my case:

Remove-AppxPackage -package 027b8cb5-10c6-42b7-bd06-828fad8e3dfb_0.2.5.0_x64__sqbt0zj9e43cj -AllUsers

Unfortunately, you don’t get any indication this has worked, so type the get command again:

Get-AppxPackage -name 027b8cb5-10c6-42b7-bd06-828fad8e3dfb -AllUsers

And you should see that nothing is returned. Now, when you run it, it should be fine:
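
As an aside, the get and remove steps can also be combined by piping one cmdlet into the other, which avoids copying the PackageFullName by hand. This is a sketch using the same package name as above; treat it as untested:

```powershell
Get-AppxPackage -Name 027b8cb5-10c6-42b7-bd06-828fad8e3dfb -AllUsers |
    Remove-AppxPackage -AllUsers
```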

References

https://developercommunity.visualstudio.com/content/problem/198610/another-user-has-already-installed-an-unpackaged-v.html

Upgrade a .Net Framework WPF Application to .Net Core 3.x

One of the main things introduced as part of .Net Core 3 was the ability to upgrade your WinForms or WPF application to use Core. I have an example of such an application here; it was upgraded using the side-by-side method. If you are upgrading, you essentially have two options: Big Bang and Side-by-Side.

Big Bang Upgrade

This is the process by which you edit the csproj file of the Framework app, and convert that file to use .Net Core 3. This means potentially less work, and is suited to a situation where you know that you will never need the original application again. For a personal project this may be fine but, realistically, it’s too big a risk for most companies, who would want the security of a gradual rollout, and the ability to fix bugs and make interim releases in the meantime.

Side-by-Side Upgrade

There are three ways to do this, but essentially, what you’re doing here is creating a second project that is running .Net Core. The benefit here is that the two applications can continue to run, and you can gradually discontinue the Framework app as and when you see fit. The work involved can be greater; but it varies depending on your methodology and requirements.

1. Copy and Paste

This is the simplest method: create a brand new .Net Core WPF application, and copy the entire contents of your directory across. You’ll need to convert the packages (covered later), but other than that, you should be good to go. Obviously, that depends hugely on the complexity of your project.

The downside here is that if you fix a bug, or make a change to one of these projects, you either need to do it twice, or have them get out of sync.

2. Two Projects One Directory

This seems to be Microsoft’s preferred approach: the idea being that you create a new .Net Core project inside the same directory as the .Net Framework app. This means that all the files just appear in your framework app. You’ll need to convert the packages, and exclude the csproj and other .Net Framework-specific files from the .Net Core project. This and the following approach both give you the ability to change the code in both projects simultaneously.
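
For example (the file names here are illustrative), the exclusions in the new SDK-style csproj might look something like this, since an SDK-style project auto-includes everything in its directory:

```xml
<ItemGroup>
  <!-- Illustrative: stop the SDK-style project picking up framework-only files -->
  <Compile Remove="Properties\AssemblyInfo.cs" />
  <None Remove="packages.config" />
</ItemGroup>
```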

3. Two Projects Linked Files

This is my personal preference. You create your .Net Core project in a different directory and just link the files and directories. You get all the benefits of having the projects in the same directory, but without the hassle of having files there that you don’t want.

The downside to this approach is that you need to include the files yourself.

Two Projects Linked Files Upgrade

The following steps are written for this particular approach but, unless stated, are not specific to it.

1. Start by installing the UWP Workload in Visual Studio, assuming you haven’t already.

2. In your WPF Framework app, convert your packages.config to PackageReference format, as packages.config doesn’t exist in .Net Core:
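
As an illustration (the package name and version are just examples), an entry like the first one below in packages.config becomes a PackageReference in the csproj:

```xml
<!-- packages.config (old style) -->
<package id="Newtonsoft.Json" version="12.0.3" targetFramework="net472" />

<!-- equivalent PackageReference in the csproj (new style) -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
</ItemGroup>
```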

3. Create a new project. Whilst this is specific to this approach, you will need a new project for any of the side-by-side methods.

For this method, the project needs to be in a different directory; my suggestion is that you put it inside the solution directory, under its own folder; for example, in the solution above, you might create WpfCoreApp1:

The directory structure might look like this:

4. Copy the package references from your packages.config directly into the new csproj file (following step 1, this should be a simple copy and paste).

5. Gut the new project by removing MainWindow.xaml and App.xaml (from here on in, all of the steps are specific to this method):

6. Edit the new csproj file. Depending on your directory structure, the following may differ, but broadly speaking you need the following code in your new csproj file:

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
  <ItemGroup>
    <ApplicationDefinition Include="..\WpfApp1\App.xaml" Link="App.xaml">
      <Generator>MSBuild:Compile</Generator>
    </ApplicationDefinition>
    <Compile Include="..\WpfApp1\App.xaml.cs" Link="App.xaml.cs" />
    <Page Include="..\WpfApp1\MainWindow.xaml" Link="MainWindow.xaml">
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Compile Include="..\WpfApp1\MainWindow.xaml.cs" Link="MainWindow.xaml.cs" />
  </ItemGroup>
</Project>

If, for example, you were to have a directory that you wished to bring across, you could use something similar to the following:

  <ItemGroup>
    <Compile Include="..\WpfApp1\Helpers\**">
      <Link>Helpers\%(Filename)%(Extension)</Link>
    </Compile>
  </ItemGroup>

That’s it – you should now be able to set your new .Net Core project as start-up and run it. The code is the same code as that running the Framework app, and should you change either, it will affect the other.

As an addendum, here is a little chart that I think should help you decide which approach to take for an upgrade: