
Short Walks – Object Locking in C#

While playing with Azure Event Hubs, I decided that I wanted to implement a thread locking mechanism that didn’t queue. That is, I want to try and get a lock on the resource, and if it’s currently in use, just forget it and move on. The default behaviour in C# is to wait for the resource. For example, consider my method:

static async Task MyProcedure()
{
    Console.WriteLine($"Test1 {DateTime.Now}");
    await Task.Delay(5000);
    Console.WriteLine($"Test2 {DateTime.Now}");
}

I could execute this a few times in parallel, like so:

static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        MyProcedure(); // fire-and-forget – the returned Task isn't awaited
    });
 
    Console.ReadLine();
}

If I wanted to lock this (just bear with me and assume that makes sense for a minute), I might do this:

private static object _lock = new object();        
 
static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        //MyProcedure();
        Lock();
    });
 
    Console.ReadLine();
}
 
static void Lock()
{
    Task.Run(() =>
    {
        lock (_lock)
        {
            MyProcedure().GetAwaiter().GetResult();
        }
    });
}

I re-jigged the code a bit, because you can’t await inside a lock statement, and obviously, just making the method call synchronous would not be locking the asynchronous call.

So now, I’ve effectively serialised my asynchronous method: each execution of `MyProcedure` will happen sequentially, because `lock` queues the waiting calls behind one another.

However, imagine the Event Hub scenario that’s referenced in the post above. I have, for example, a game, and it’s sending a large volume of telemetry up to the cloud. In my particular case, I’m sending a player’s current position. If I have a locking mechanism whereby the locks are queued, then I could potentially fall behind; and if that happens then, at best, the data sent to the cloud will be outdated and, at worst, it will use up game resources, potentially causing lag.

After a bit of research, I found an alternative:

private static object _lock = new object();        
 
static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        //MyProcedure();
        //Lock();
        TestTryEnter();
    });
 
    Console.ReadLine();
}

static async Task TestTryEnter()
{
    bool lockTaken = false;
 
    try
    {
        Monitor.TryEnter(_lock, 0, ref lockTaken);
 
        if (lockTaken)
        {
            await MyProcedure();                                        
        }
        else
        {
            Console.WriteLine("Could not get lock");
        }
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(_lock);
        }
    }
}

So here, I try to get the lock, and if the resource is already locked, I simply give up and go home. There are obviously a very limited number of uses for this; however, my Event Hub scenario, described above, is one of them. Depending on the type of data that you’re transmitting, it may make much more sense to have a go, and if you’re in the middle of another call, simply abandon the current one.
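
One caveat with the code above: Monitor is thread-affine, and after the await, the continuation may well resume on a different thread pool thread, in which case the Monitor.Exit call can throw. If that’s a concern, the same “try, and give up if busy” pattern can be sketched with a SemaphoreSlim, which doesn’t care which thread releases it (the method name here is my own):

private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);

static async Task TestTryEnterSemaphore()
{
    // Wait(0) returns immediately: true if we acquired the "lock", false if it's already held
    if (!_semaphore.Wait(0))
    {
        Console.WriteLine("Could not get lock");
        return;
    }

    try
    {
        await MyProcedure();
    }
    finally
    {
        // unlike Monitor, SemaphoreSlim can be released from any thread
        _semaphore.Release();
    }
}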

Short Walks – Using JoinableTaskFactory

While attending DDD North this year, I attended a talk on avoiding deadlocks in asynchronous programming. During this talk, I was introduced to the JoinableTaskFactory.

This became strangely useful very quickly, when I encountered a problem similar to that described here. There are a couple of solutions to this question, but the one with the least code churn is simply to make the code synchronous; however, if you do that by just adding

.GetAwaiter().GetResult()

to the end of the async calls, you’re very likely to end up with a deadlock.

One possible solution is to wrap the call using the JoinableTaskFactory, in the following way:

// JoinableTaskFactory lives in the Microsoft.VisualStudio.Threading NuGet package
var jtf = new JoinableTaskFactory(new JoinableTaskContext());
var result = jtf.Run<DataResult>(() => _myClass.GetDataAsync());

This allows the task to return on the same synchronisation context without causing a deadlock.
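
As a side note, my understanding from the library’s documentation is that the JoinableTaskContext should be created once (ideally on the application’s main thread) and reused, rather than newed up per call; a rough sketch of that, reusing the hypothetical _myClass and DataResult from above:

// create once, ideally on the application's main thread, and reuse
private static readonly JoinableTaskContext _joinableTaskContext = new JoinableTaskContext();
private static readonly JoinableTaskFactory _jtf = new JoinableTaskFactory(_joinableTaskContext);

public DataResult GetData()
{
    // Run blocks the calling thread, but joins the asynchronous work to it,
    // so its continuations can complete without deadlocking
    return _jtf.Run(() => _myClass.GetDataAsync());
}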

References

https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.threading.joinabletaskfactory

https://stackoverflow.com/questions/33913836/how-to-render-a-partial-view-asynchronously

Asynchronous Debugging

Everyone who has spent time debugging errors in code that has multiple threads knows the pain of pressing F10 and seeing the cursor jump to a completely different part of the system (that is, everyone who has ever tried it).

There are a few tools in VS2017 that make this process slightly easier; and this post attempts to provide a brief summary. Obviously the examples in this post are massively contrived.

Errors

Let’s start with an error occurring inside a parallel loop. Here’s some code that will cause the error:

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
 
    Parallel.For(1, 10, (i) => RunProcess(i));
 
    Console.ReadLine();
}
 
static void RunProcess(int i)
{
    // simulate some work
    Task.Delay(500).GetAwaiter().GetResult();
 
    Console.WriteLine($"Running {i}");
 
    // deliberately fail on one of the threads
    if (i == 3) throw new Exception("error");
}

For some reason, I get an error when a few of these threads have started. I need a tool that tells me about the local variables in each of the threads. Enter the Parallel Watch Window:

Figure 1 – Launch Parallel Watch Window

This gives me a familiar interface, and tells me which thread I’m currently on:

Figure 2 – Parallel Watch Window

However, what I really want to see is the data local to each thread; so what happens if I put “i” in the “Add Watch” cell:

Figure 3 – Add a watch

As you can see, I have a horizontal list of watch expressions, so I can monitor variables in multiple threads at a time.

Flagging a thread

We know there’s an issue with one of these threads, so one possibility is to flag that thread:

Figure 4 – Flagging a thread

Then you can select to show only flagged threads:

Figure 5 – Filter flagged threads

Freezing non-relevant threads

The flags help to only trace the threads that you care about, but if you want to only run the threads that you care about, you can freeze the other threads:

Figure 6 – Freeze Thread

Once you’ve frozen a thread, a small pause icon appears, and that thread will stop:

Figure 7 – Frozen Thread

In order to freeze other threads, simply highlight all the relevant threads (Ctrl-A) and select Freeze.

It’s worth remembering that you can’t freeze a thread that doesn’t exist yet (so your breakpoint in a Parallel.For loop might only show half the threads).

Manual thread hopping

By using freeze, you can stop the debug message from jumping between threads. You can then manually control this process by simply selecting a thread and “Switch To Frame”:

Figure 8 – Switch to Frame

You can switch to a frozen frame, but as soon as you try to progress, you’ll flip back to the first non-frozen frame (unless you thaw it). The consequence of this is that it is possible to switch to a frozen frame, freeze all other frames, and then press F10 – your program will then stop dead.

Stack Trace

In a single-threaded application (and, indeed, in a multi-threaded one), you can always view the stack trace of a given line of executing code. There is also a Parallel Stacks window:

Figure 9 – Parallel Stacks

Selecting any given method will give us the active threads, and allow switching:

Figure 10 – Active Threads

Parallel Stack Trace – Task View

The above view shows you the threads created for your program; but most of the time, you won’t care which threads have been created – only which tasks you’ve spawned (and the two are not necessarily in a 1-1 relationship). You can simply switch the view in this window to show Tasks instead:

Figure 11 – Task View

Tasks & Threads Windows

There is a tool that allows you to view all active, blocked and scheduled tasks:

Figure 12 – Tasks Window

This allows you to freeze an entire task, switch to a given task, and Freeze All But This:

Figure 13 – Freeze All But This

There is an equivalent window for Threads. It is broadly the same idea; however, it does have one feature that the Tasks window does not, and that is the ability to rename a thread:

Figure 14 – Rename a Thread

Flags

The other killer feature both of these windows have is the flag feature. Simply flag a thread, switch to it, and then select “Show Only Flagged Threads” (little flag icon). If you now remove the breakpoints, you can step through only your thread or task!

Breakpoints

So, what do you do when you have a breakpoint that you only want to fire for a single thread? Helpfully, the breakpoints window has a filter feature:

Figure 15 – Filter breakpoints on thread Id
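
The filter accepts simple expressions (which can be combined with & and ||); for example, to restrict a breakpoint to a single thread (the id below is obviously just an illustration):

ThreadId = 12345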

References

https://msdn.microsoft.com/en-us/library/dd554943.aspx

https://stackoverflow.com/questions/5304752/how-to-debug-a-single-thread-in-visual-studio

TaskCompletionSource

I’ve had a couple of problems recently, where I’ve had tasks or asynchronous methods and they don’t quite fit into the architecture that I find myself in. I’d come across the TaskCompletionSource before, but hadn’t realised how useful it was. Basically, a TaskCompletionSource allows you to control when a task finishes, and allows you to do so from synchronous or asynchronous code. What this gives you is precise control over when an awaited task finishes.
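
As a minimal sketch of the idea (the variable names here are mine, rather than from any of the scenarios below):

// a TaskCompletionSource hands you a Task that only completes when you say so
TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();

Task<bool> task = tcs.Task;   // not complete yet – anything awaiting it will wait
tcs.SetResult(true);          // now it is complete
bool result = await task;     // returns true immediately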

UWP

Consider the following code in UWP. Basically, what this does is execute an anonymous function on the UI thread:

await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.High, async () => 
{
    await MyAsyncFunc();
});
System.Diagnostics.Debug.WriteLine("After MyAsyncFunc");

The problem here is that the outer await completes as soon as the anonymous delegate hits its first await, not when MyAsyncFunc has actually finished – so the “After MyAsyncFunc” line can run too early. However, using the TaskCompletionSource, we can bypass that whole conversation:

TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();

await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.High, async () => 
{
    await MyAsyncFunc();
    System.Diagnostics.Debug.WriteLine("After MyAsyncFunc");

    tcs.SetResult(true);
});
await tcs.Task;

Now the function will return when TaskCompletionSource.SetResult has been called.

Event based

The second scenario where this is useful is when you are trying to use an event-based architecture within an async/await world. The following example is a little contrived, but it does illustrate the point:

    class Program
    {
        private static Timer _tmr = new Timer();   // System.Timers.Timer
        private static TaskCompletionSource<bool> _tcs;

        static void Main(string[] args)
        {
            var tmr = StartTimer();

            Console.WriteLine("Before wait...");
            tmr.Wait();

            Console.WriteLine("After wait...");
        }        

        private static async Task StartTimer()
        {            

            _tmr.Interval = 3000;
            _tmr.Elapsed += _tmr_Elapsed;
            _tmr.Start();

            _tcs = new TaskCompletionSource<bool>();
            await _tcs.Task;
        }

        private static void _tmr_Elapsed(object sender, ElapsedEventArgs e)
        {
            _tcs.SetResult(true);
        }
    }

Potentially, a more real-world example of this is when you want to wrap an event- or callback-based API so that it can be awaited.
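
For instance, here’s a rough sketch (not from the scenario above – purely an illustration) of wrapping the event-based System.Diagnostics.Process.Exited API so that it can be awaited:

static Task WaitForExitAsync(Process process)
{
    var tcs = new TaskCompletionSource<bool>();

    process.EnableRaisingEvents = true;
    process.Exited += (sender, e) => tcs.TrySetResult(true);

    // cover the case where the process exited before we subscribed
    if (process.HasExited) tcs.TrySetResult(true);

    return tcs.Task;
}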

Control over exactly when a task finishes, and the ability to await async void methods

The final scenario where this can be useful is where you either want to await an `async void` method, or where you have a specific part of a method or process that you want to await.

The following code illustrates how to effectively await an async void method:

    class Program
    {        
        private static TaskCompletionSource<bool> _tcs;

        static void Main(string[] args)
        {
            _tcs = new TaskCompletionSource<bool>();
            BackgroundFunction();

            _tcs.Task.Wait();

            Console.WriteLine("Done");
        }        

        private static async void BackgroundFunction()
        {
            for (int i = 1; i <= 10; i++)
            {
                Console.WriteLine($"Processing: {i}");
                await DoStuff();
            }
            _tcs.SetResult(true);
        }

        private static async Task DoStuff()
        {
            await Task.Delay(500);
            
        }

    }

Finally, here is a parallel for loop:

        static void Main(string[] args)
        {
            Parallel.For(1, 3, (i) =>
            {
                BackgroundFunction();
            });

            Console.WriteLine("Done");
        }        

Imagine that BackgroundFunction is performing a long-running task, where a specific condition should return control to the caller. There are obviously functions in the TPL for this kind of thing (Task.WaitAll, Task.WaitAny, Task.WhenAll and Task.WhenAny); however, these rely on a whole task, or a set of tasks, completing. Again, the below example is contrived, but it illustrates the granular control over the task that you have.

        static void Main(string[] args)
        {
            _tcs = new TaskCompletionSource<bool>();

            for (int i = 1; i <= 2; i++)
            {
                BackgroundFunction();
            }

            _tcs.Task.Wait();            

            Console.WriteLine("Done");
        }        

        private static async void BackgroundFunction()
        {
            for (int i = 1; i <= 10; i++)
            {
                Console.WriteLine($"Processing: {i}");
                await DoStuff();

                if (i == 7)
                {
                    _tcs.TrySetResult(true);
                    return;                    
                }
            }            
        }

I will reiterate: I realise that in the above example there are better ways to achieve this, and the example is purely for illustration.

Conclusion

Generally speaking, the simplest and most robust code comes from using the task architecture in the way it was designed: that is, use async / await inside a method that returns a Task. I’m not suggesting in this post that the methods I’ve described should replace that; but there are situations where that might not fit.

Acknowledgements

I used the following posts heavily while writing this:

Awaiting the CoreDispatcher
The Nature of TaskCompletionSource
Real life scenarios for using TaskCompletionSource?
Task Parallelism