Monthly Archives: December 2017

Getting Started With iOS for a C# Programmer – Part Four – Controlling an Object

In the previous post, we covered how to move an object on the screen; this time, we’re going to control it. However, before we do, let’s change the game orientation to landscape only, as that’s really the only orientation that makes sense on an iPhone for this type of game.

Landscape

The first thing to do is to change the code in the GameViewController to force the orientation to landscape:

    override var shouldAutorotate: Bool {
        return true
    }

    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        if UIDevice.current.userInterfaceIdiom == .phone || UIDevice.current.userInterfaceIdiom == .pad {
            return .landscape
        } else {
            return .all
        }
    }

When you run the simulator, the game will now appear on its side; to rectify that, select Hardware -> Rotate Left*:

Control

As you may have noticed from the previous post, we do have a modicum of control over the rectangle; let’s now change that so that we can tap to the left of it and have it move left, or to the right and have it move right.

Left and Right

It turns out that this is pretty easy once you understand how the co-ordinates work in SpriteKit; note that the touch location below is translated into the player node’s co-ordinate space, so a negative x means the touch was to the player’s left:

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for t in touches {
            self.touchDown(atPoint: t.location(in: player!))
        }
    }

And the touchDown function:

    func touchDown(atPoint pos : CGPoint) {
        print(pos.x, pos.y)
        
        // The position is relative to the player's centre, so anything within
        // half the player's width either side of zero is the player itself.
        let halfWidth = (player?.frame.width)! / 2
        
        if pos.x < -halfWidth {
            playerSpeed? -= CGFloat(1.0)
        } else if pos.x > halfWidth {
            playerSpeed? += CGFloat(1.0)
        }
    }

Finally, we need the update event to do something with the playerSpeed:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
    }

Now we have a game where we can move the player left and right.

Tidy up

To clean up the screen before we get into firing, it would be nice if the player was lower down the screen, and a slightly different colour:

    func CreatePlayer() -> SKShapeNode {
        
        let playerSize = CGSize(width: 100, height: 10)
        
        let player = SKShapeNode(rectOf: playerSize)
        print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
        player.position = CGPoint(x: self.frame.midX, y: -150)
        player.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 200.0/255.0, alpha: 1.0)
        player.lineWidth = 1
        
        player.physicsBody = SKPhysicsBody(rectangleOf: playerSize)
        
        player.physicsBody?.affectedByGravity = false
        player.physicsBody?.isDynamic = true
        
        return player
    }

Remember, this is a landscape-only game.

Fire

Okay – so firing is just an extra clause in our touchDown function:

    
    func touchDown(atPoint pos : CGPoint) {
        print(pos.x, pos.y)
        
        let halfWidth = (player?.frame.width)! / 2
        
        if pos.x < -halfWidth {
            playerSpeed? -= CGFloat(1.0)
        } else if pos.x > halfWidth {
            playerSpeed? += CGFloat(1.0)
        } else {
            // A touch over the player itself fires, rather than steers
            Fire()
        }
    }

    func Fire() {
        let bullet = CreateBullet(point: (player?.position)!)
        self.addChild(bullet)
    }

    func CreateBullet(point : CGPoint) -> SKShapeNode {
        
        let bulletSize = CGSize(width: 1, height: 10)
        
        let bullet = SKShapeNode(rectOf: bulletSize)
        //print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
        bullet.position = point
        bullet.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 200.0/255.0, alpha: 1.0)
        bullet.lineWidth = 4
        
        bullet.physicsBody = SKPhysicsBody(rectangleOf: bulletSize)
        
        bullet.physicsBody?.affectedByGravity = false
        bullet.physicsBody?.isDynamic = true
        
        bullet.name = "bullet"
        
        return bullet
    }

All we’ve actually done here is to create a new rectangle, positioned at the centre of the player. We’ve added it to the scene (self), so the automatic rendering will pick it up; next, we need to move it:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
        
        self.enumerateChildNodes(withName: "bullet") {
            (node, _) in
            node.position.y += 1
        }
    }

Footnotes

* Remember, when holding an iPhone or iPad, the button should always be on the right hand side – without wishing to judge, anyone that feels different is wrong, and very possibly evil.

References

https://stackoverflow.com/questions/475553/how-can-i-test-landscape-view-using-the-iphone-simulator

Working with Multiple Cloud Providers – Part 2 – Getting Data Into BigQuery

In the first part of this series, I described how we might attempt to help Santa and his delivery drivers to deliver presents to every child in the world, using the combined power of Google and Microsoft.

In this, the second part of the series (there will be one more), I’m going to describe how we might set up a GCP pipeline that feeds that data into BigQuery (Google’s big-data analytics warehouse offering). We’ll first set up BigQuery, then the PubSub topic, and finally, we’ll set up the dataflow, ready for Part 3, which will be joining the two systems together.

BigQuery

Once you navigate to the BigQuery section of the GCP console, you’ll be able to create a Dataset:

You can now set-up a new table. As this is an illustration, we’ll keep it as simple as possible, but you can see that this might be much more complex:

One thing to bear in mind about BigQuery, and cloud data storage in general, is that it often makes sense to de-normalise your data – storage is often much cheaper than CPU time.
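As an aside, if you’d rather script this set-up than click through the console, something broadly like the following should work from C# – a sketch, assuming the (currently beta) Google.Cloud.BigQuery.V2 NuGet package, and assuming a deliberately simple two-field delivery schema (the project, dataset and table names here are made up):

using Google.Cloud.BigQuery.V2;

// Create the dataset, then a deliberately simple two-column table.
// "my-project-id", "santa" and "deliveries" are placeholder names.
var client = BigQueryClient.Create("my-project-id");
client.CreateDataset("santa");

var schema = new TableSchemaBuilder
{
    { "childName", BigQueryDbType.String },
    { "present", BigQueryDbType.String }
}.Build();

client.CreateTable("santa", "deliveries", schema);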

PubSub

Now that we have somewhere to put the data, we could simply have the Azure function write the data into BigQuery. However, we might then run into problems if the data flow suddenly spiked. For this reason, Google recommends the use of PubSub as a shock absorber.

Let’s create a PubSub topic. I’ve written in more detail on this here:

DataFlow

The last piece of the jigsaw is Dataflow. Dataflow can be used for much more complex tasks than to simply take data from one place and put it in another, but in this case, that’s all we need. Before we can set-up a new dataflow job, we’ll need to create a storage bucket:

We’ll create the bucket as Regional for now:

Remember that the bucket name must be unique (so no-one can ever pick pcm-data-flow-bucket again!)

Now, we’ll move onto the DataFlow itself. We get a number of dataflow templates out of the box, and we’ll use one of those. Let’s launch dataflow from the console:

Here we create a new Dataflow job:

We’ll pick “PubSub to BigQuery”:

You’ll then get asked for the name of the topic (which was created earlier) and the storage bucket (again, created earlier); your form should look broadly like this when you’re done:

I strongly recommend specifying a maximum number of workers, at least while you’re testing.

Testing

Finally, we’ll test it. PubSub allows you to publish a message:
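One thing worth knowing about the stock “PubSub to BigQuery” template: it expects each message body to be a JSON object whose fields match the table’s columns. So, assuming the simple two-field schema from earlier, a test message might look like this:

{"childName": "Alice", "present": "Bicycle"}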

Next, visit the Dataflow to see what’s happening:

Looks interesting! Finally, in BigQuery, we can see the data:

Summary

We now have the two separate cloud systems functioning independently. Step three will be to join them together.

Working with Multiple Cloud Providers – Part 1 – Azure Function

Regular readers (if there are such things for this blog) may have noticed that I’ve recently been writing a lot about two main cloud providers. I won’t link to all the articles, but if you’re interested, a quick search for either Azure or Google Cloud Platform will yield several results.

Since it’s Christmas, I thought I’d do something a bit different and try to combine them. This isn’t completely frivolous; both have advantages and disadvantages: GCP is very geared towards big data, whereas the Azure Service Fabric provides a lot of functionality that might fit well with a much smaller LOB app.

So, what if we had the following scenario:

Santa has to deliver presents to every child in the world in one night. Santa is only one man*, and Google tells me there are 1.9B children in the world, so he contracts out a series of delivery drivers. There need to be around 79M deliveries every hour; let’s assume that each delivery driver can work 24 hours**. If each driver can manage, say, 100 deliveries per hour, that means we need around 790,000 drivers. Every delivery driver has an app that links to their depot; recording deliveries, schedules, etc.
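Sanity-checking those numbers:

$$\frac{1.9 \times 10^9 \text{ children}}{24 \text{ hours}} \approx 7.9 \times 10^7 \text{ deliveries/hour}$$

$$\frac{7.9 \times 10^7 \text{ deliveries/hour}}{100 \text{ deliveries per driver per hour}} \approx 790{,}000 \text{ drivers}$$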

That would be a good app to write in, say, Xamarin, and maybe have an Azure service running it; here’s the obligatory box diagram:

The service might talk to the service bus, might control stock, send e-mails – all kinds of LOB jobs. Now, I’m not saying for a second that Azure can’t cope with this, but what if we suddenly want all of these instances to feed metrics into a single data store? There are 190*** countries in the world; if each has a depot, then there are ~416K messages / hour going into each Azure service, but 79M / hour going into a single DB. Because it’s Christmas, let’s assume that Azure can’t cope with this; or let’s say that GCP is a little cheaper at this scale; or that we have some Hadoop jobs that we’d like to use on the data. In theory, we can link these systems, which might look something like this:

So, we have multiple instances of the Azure architecture, and they all feed into a single GCP service.

Disclaimer

At no point during this post will I attempt to publish 79M records / hour to GCP BigQuery. Neither will any Xamarin code be written or demonstrated – you have to use your imagination for that bit.

Proof of Concept

Given the disclaimer I’ve just made, calling this a proof of concept seems a little disingenuous; but let’s imagine that we know that the volumes aren’t a problem and concentrate on how to link these together.

Azure Service

Let’s start with the Azure Service. We’ll create an Azure function that accepts an HTTP message, updates a DB, and then posts a message to Google PubSub.

Storage

For the purpose of this post, let’s store our individual instance data in Azure Table Storage. I might come back at a later date and work out how and whether it would make sense to use CosmosDB instead.

We’ll set up a new table called Delivery:

Azure Function

Now that we have somewhere to store the data, let’s create an Azure Function App that updates it. In this example, we’ll create a new Function App from VS:

In order to test this locally, change local.settings.json to point to your storage location described above.
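As a sketch, it might look broadly like this (the santa_azure_table_storage key matches the Connection name used in the binding below; UseDevelopmentStorage=true points at the local storage emulator – substitute a real connection string to hit actual table storage):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
    "santa_azure_table_storage": "UseDevelopmentStorage=true"
  }
}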

And here’s the code to update the table:

    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Azure.WebJobs.Host;
    using Microsoft.WindowsAzure.Storage.Table;

    public static class DeliveryComplete
    {
        [FunctionName("DeliveryComplete")]
        public static HttpResponseMessage Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req,
            TraceWriter log,
            [Table("Delivery", Connection = "santa_azure_table_storage")] ICollector<TableItem> outputTable)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // Parse the query parameters
            string childName = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "childName", true) == 0)
                .Value;

            string present = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "present", true) == 0)
                .Value;

            // Guard against missing parameters - childName.First() below would throw
            if (string.IsNullOrEmpty(childName) || string.IsNullOrEmpty(present))
                return req.CreateResponse(HttpStatusCode.BadRequest);

            var item = new TableItem()
            {
                childName = childName,
                present = present,
                RowKey = childName,
                PartitionKey = childName.First().ToString()
            };

            outputTable.Add(item);

            return req.CreateResponse(HttpStatusCode.OK);
        }

        public class TableItem : TableEntity
        {
            public string childName { get; set; }
            public string present { get; set; }
        }
    }

Testing

There are two ways to test this; the first is to just press F5: that will launch the function as a local service, and you can use Postman or similar to test it; the alternative is to deploy to the cloud. If you choose the latter, then your local.settings.json will not come with you, so you’ll need to add an app setting:

Remember to save this setting, otherwise, you’ll get an error saying that it can’t find your setting, and you won’t be able to work out why – ask me how I know!

Now, if you run a test …
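If you’d rather script the test than use Postman, a minimal C# smoke test might look broadly like this (the port is the local Functions default; once deployed, you’d use the real host name and append the function key as a code query parameter):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class DeliverySmokeTest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The query string parameters match those parsed by the function above
            var response = await client.PostAsync(
                "http://localhost:7071/api/DeliveryComplete?childName=Jane&present=Bicycle",
                null);

            Console.WriteLine(response.StatusCode);
        }
    }
}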

You should be able to see your table updated (shown here using Storage Explorer):

Summary

We now have a working Azure function that updates a storage table with some basic information. In the next post, we’ll create a GCP service that pipes all this information into BigQuery, and then link the two systems.

Footnotes

* Remember, all the guys in Santa suits are just helpers.
** That brandy you leave out really hits the spot!
*** I just Googled this – it seems a bit low to me, too.

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings#manage-app-service-settings

https://anthonychu.ca/post/azure-functions-update-delete-table-storage/

https://stackoverflow.com/questions/44961482/how-to-specify-output-bindings-of-azure-function-from-visual-studio-2017-preview

A C# Programmer’s Guide to Google Cloud Pub Sub Messaging

The Google Cloud Platform provides a Publish / Subscribe system called ‘PubSub’. In a previous post I wrote a basic guide on setting up RabbitMQ, in another I wrote about ActiveMQ, and in a third I wrote about using the Azure messaging system. Here, I’m going to give an introduction to using the GCP PubSub system.

Introduction

The above systems that I’ve written about in the past are fully featured (yes, including Azure) message bus systems. While the GCP offering is a Message Bus system of sorts, it is definitely lacking some of the features of the other platforms. I suppose this stems from the fact that, in the GCP case, it serves a specific purpose, and is heavily geared toward that purpose.

Other messaging systems do offer the Pub / Sub model: the idea being that you create a topic, and anyone that’s interested can subscribe to it. Once you’ve subscribed, you’re guaranteed* to get at least one delivery of the published message. You can also, kind of, simulate a message queue, because more than one subscriber can take messages from a single subscription.

Pre-requisites

If you want to follow along with the post, you’ll need to have a GCP subscription, and a GCP project configured.

Topics

In order to set up a new topic, we’re going to navigate to the PubSub menu in the console (you may be prompted to Enable PubSub when you arrive).

As you can see, you’re inundated with choice here. Let’s go for “Create a topic”:

Cloud Shell

You’ve now created a topic; however, that isn’t the only way that you can do this. Google are big on using the Cloud Shell, and so you can create a topic using that; in order to do so, you select the cloud shell icon:

Once you get the cloud shell, you can use the following command**:

gcloud beta pubsub topics create "test"

Subscriptions and Publishing

You can publish a message now if you like; either from the console:

Or from the Cloud Shell:

gcloud beta pubsub topics publish "test" "message"

Both will successfully publish a message that will get delivered to all subscribers. The problem is that you haven’t created any subscribers yet, so it just dissipates into the ether***.

You can see there are no subscriptions, because the console tells you****:

Let’s create one:

Again, you can create a subscription from the cloud shell:

gcloud beta pubsub subscriptions create "mysubscription" --topic "test"

So, we now have a subscription, and a message.

Consuming messages

In order to consume messages in this instance, let’s create a little cloud function. I’ve previously written about creating these here. Instead of creating an HTTP trigger, this time, we’re going to create a function that reacts to something on a cloud Pub/Sub topic:

Select the relevant topic; the default code just writes the text out to the console, so that’ll do:

/**
 * Triggered from a message on a Cloud Pub/Sub topic.
 *
 * @param {!Object} event The Cloud Functions event.
 * @param {!Function} callback The callback function.
 */
exports.subscribe = function subscribe(event, callback) {
  // The Cloud Pub/Sub Message object.
  const pubsubMessage = event.data;

  // We're just going to log the message to prove that
  // it worked.
  console.log(Buffer.from(pubsubMessage.data, 'base64').toString());

  // Don't forget to call the callback.
  callback();
};

So, now we have a subscription:

Let’s see what happens when we artificially push a message to it.

If we now have a look at the Cloud Function, we can see that something has happened:

And if we select “View Logs”, we can see what:

It worked! Next…

Create Console App

Now we have something that will react to a message, let’s try and generate one programmatically, in C# from a console app. Obviously the first thing to do is to install a NuGet package that isn’t past the beta stage yet:

Install-Package Google.Cloud.PubSub.V1 -Pre

Credentials

In a previous post I described how you might create a credentials file. You’ll need to do that again here (and, I think, anywhere that you want to access GCP from outside of the cloud).

In APIs & Services, select “Create credentials”:

Again, select a JSON file:

The following code publishes a message to the topic:

using System;
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;
using Grpc.Core;
using Grpc.Core.Logging;

// Note: async Main requires C# 7.1 or later
static async Task Main(string[] args)
{
    Environment.SetEnvironmentVariable(
        "GOOGLE_APPLICATION_CREDENTIALS",
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "my-credentials-file.json"));

    GrpcEnvironment.SetLogger(new ConsoleLogger());

    string projectId = "test-project-123456";
    var topicName = new TopicName(projectId, "test");

    // SimplePublisher is the high-level publishing API in the beta package
    SimplePublisher simplePublisher = await SimplePublisher.CreateAsync(topicName);
    string messageId = await simplePublisher.PublishAsync("test message");
    await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
}

And we can see that message in the logs of the cloud function:
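Although this post consumes the message with a cloud function, the same beta package can also pull messages from C#. The following is a minimal sketch, assuming the beta SimpleSubscriber API (which may well change before release), and the credentials environment variable set as in the publisher above:

using System;
using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

class Consumer
{
    static async Task Main(string[] args)
    {
        // The same project and subscription as created earlier
        var subscriptionName = new SubscriptionName("test-project-123456", "mysubscription");
        SimpleSubscriber subscriber = await SimpleSubscriber.CreateAsync(subscriptionName);

        // StartAsync runs until StopAsync is called; each message is acked once handled
        await subscriber.StartAsync((PubsubMessage message, CancellationToken cancel) =>
        {
            Console.WriteLine(message.Data.ToStringUtf8());
            return Task.FromResult(SimpleSubscriber.Reply.Ack);
        });
    }
}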

Permissions

Unless you choose otherwise, the service account will look something like this:

The Editor permission that it gets by default is a sort of God permission. This can be fine-grained by removing that, and selecting specific permissions; in this case, Pub/Sub -> Publisher. It’s worth bearing in mind that as soon as you remove all permissions, the account is removed, so try to maintain a single permission (project browser seems to be suitably innocuous).

Footnotes

* Google keeps messages for up to 7 days, so the guarantee has a time limit.

** gcloud may need to be initialised. If it does then:

gcloud init
gcloud components install beta

*** This is a big limitation. Whilst all topic subscriptions in other systems do work like this, in those systems, you have the option of a queue – i.e. a place for messages to live that no-one is listening for.

**** If you create a subscription in the Cloud Shell, it will not show in the console until you F5 (there may be a timeout, but I didn’t wait that long). The problem here is that F5 messes up the shell window.

References

https://cloud.google.com/pubsub/docs/reference/libraries

https://cloud.google.com/iam/docs/understanding-roles

Google Cloud Datastore – Setting up a new Datastore and accessing it from a console application

Datastore is a NoSql offering from Google. It’s part of their Google Cloud Platform (GCP). The big mind shift, if you’re used to a relational database, is to remember that each row (although they aren’t really rows) in a table (they aren’t really tables) can be different. The best way I could think about it was a text document; each line can have a different number of words, numbers and symbols.

However, just because it isn’t relational, doesn’t mean you don’t have to consider the structure; in fact, it actually seems to mean that there is more onus on the designer to consider what and where the data will be used.

Pre-requisites

In order to follow this post, you’ll need an account on GCP, and a Cloud Platform Project.

Set-up a New Cloud Datastore

The first thing to do is to set up a new Datastore:

Zones

The next step is to select a Zone. The big thing to consider, in terms of cost and speed, is to co-locate your data where possible. Specifically with data, you’ll incur egress charges (that is, you’ll be charged as your data leaves its zone), so your zone should be nearby, and co-located with anything that accesses it. Obviously, in this example, you’re accessing the data from where your machine is located, so pick a zone that is close to where you live.

In Britain, we’re in Europe-West-2:

Entities and Properties

The next thing is to set up a new entity. As we said, an entity is loosely analogous to a table.

Now we have an entity, the entity needs some properties. A property, again, is loosely analogous to a field, if fields were not required to be consistent throughout the table. I’m unsure how this works behind the scenes, but it appears to simply null out the columns that have no value; I suspect this may be a visual display thing.

You can set the value (as above), and then query the data, either in a table format (as below):

Or, you can use a SQL-like syntax (as below).

Credentials

In order to access the datastore from outside the GCP, you’ll need a credentials file. You’ll need to start off in the Credentials screen:

In this instance, we’ll set up a service account key:

This creates the key as a json file:

The file should look broadly like this:

{
  "type": "service_account",
  "project_id": "my-project-id",
  "private_key_id": "private_key_id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nkeydata\n-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]t.com",
  "client_id": "clientid",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-project-id%40appspot.gserviceaccount.com"
}

Keep hold of this file, as you’ll need it later.

Client Library

There is a .Net client library provided for accessing this functionality from your website or desktop app. What we’ll do next is access that entity from a console application. The obvious first step is to create one:

Credentials again

Remember that credentials file I said to hang on to? Well, now you need it. It needs to be accessible from your application; there are a number of ways to address this problem, and the one that I’m demonstrating here is probably not a sensible solution in real life, but for the purpose of testing, it works fine.

Copy the credentials file into your project directory and include it in the project, then, set the properties to:

Build Action: None
Copy to Output Directory: Copy if Newer

GCP Client Package

You’ll need to install the correct NuGet package:

Install-Package Google.Cloud.Datastore.V1

Your Project ID

As you use the GCP more, you’ll come to appreciate that the project ID is very important. You’ll need to make a note of it (if you can’t find it, simply select Home from the hamburger menu):

The Code

All the pieces are now in place, so let’s write some code to access the datastore:

Environment.SetEnvironmentVariable(
    "GOOGLE_APPLICATION_CREDENTIALS",
    Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "my-credentials-file.json"));
 
GrpcEnvironment.SetLogger(new ConsoleLogger());
 
// Your Google Cloud Platform project ID
string projectId = "my-project-id";
 
DatastoreClient datastoreClient = DatastoreClient.Create();
 
DatastoreDb db = DatastoreDb.Create(projectId, "TestNamespace", datastoreClient);

string kind = "MyTest";

string name = "newentitytest3";
KeyFactory keyFactory = db.CreateKeyFactory(kind);
Key key = keyFactory.CreateKey(name);
 
var task = new Entity
{
    Key = key,
    ["test1"] = "Hello, World",
    ["test2"] = "Goodbye, World",
    ["new field"] = "test"
};
 
using (DatastoreTransaction transaction = db.BeginTransaction())
{                
    transaction.Upsert(task);
    transaction.Commit();
}

If you now check, you should see that your Datastore has been updated:

There are a few things to note here; the first is that you will need to select the right Namespace and Kind. Namespace defaults to [default], and so you won’t see your new records until you select that.
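As a quick sanity check from code, you can also read the data back. A minimal sketch, reusing the db instance from the snippet above (RunQuery is the synchronous form; there is an async equivalent):

// Query everything of kind "MyTest" in the namespace that DatastoreDb was created with
Query query = new Query("MyTest");
DatastoreQueryResults results = db.RunQuery(query);

foreach (Entity result in results.Entities)
{
    Console.WriteLine($"{result.Key.Path[0].Name}: {result["test1"]}");
}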

When things go wrong

The above instructions are deceptively simple; however, getting this example working was by no means straightforward. Fortunately, when you have a problem with GCP and you ask on StackOverflow, you get answered by Jon Skeet. The following is a summary of an error that I encountered.

System.InvalidOperationException

System.InvalidOperationException: ‘The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.’

The error occurred on the BeginTransaction line.

The ConsoleLogger above isn’t just there for show, and does give some additional information; in this case:

D1120 17:59:00.519509 Grpc.Core.Internal.UnmanagedLibrary Attempting to load native library "C:\Users\pmichaels\.nuget\packages\grpc.core\1.4.0\lib\netstandard1.5/../..\runtimes/win/native\grpc_csharp_ext.x64.dll"
D1120 17:59:00.600298 Grpc.Core.Internal.NativeExtension gRPC native library loaded successfully.
E1120 17:59:02.176461 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x64\src\core\lib\security\credentials\plugin\plugin_credentials.c:74: Getting metadata from plugin failed with error: Exception occured in metadata credentials plugin.

It turns out that the code was failing somewhere in here. Finally, with much help, I managed to track the error down to being a firewall restriction.

References

https://cloud.google.com/datastore/docs/reference/libraries

https://cloud.google.com/datastore/docs/concepts/entities?hl=en_US&_ga=2.55488619.-171635733.1510158034

https://cloud.google.com/dotnet/

https://github.com/GoogleCloudPlatform/dotnet-docs-samples

https://developers.google.com/identity/protocols/application-default-credentials

https://cloud.google.com/datastore/docs/concepts/overview


Getting Started With iOS for a C# Programmer – Part Three – Moving an Object

In the first post in this series, I described how to set up an account and create a very basic program. I also laid out a timeline for getting a very basic game up and running on an iPhone. This post follows on from the second, where I described how to run an application in the simulator.

This time, we’re going to create a different type of project and start the game. In the first post, I set the target for this post as making an object move across the screen; we’re actually going to go slightly further and allow a small amount of control.

Create a new Game Project

This time, when we create a new project, we’ll select “Game”:

The next screen is vaguely familiar; for this particular example, we want Swift as the language, and SpriteKit as the game framework:

This should create a new project that looks something like this:

It also gives you some code… which causes the program to do this out of the box:

Clear Default Game Code

There’s quite a lot of code generated by default in order to create this magnificent application. A lot of it needs deleting; let’s start with the text. In GameScene.sks, select the “Hello, World” text and delete it:

Most of the rest of the code is in GameScene.Swift; you can simply clear that out, although, unless your target is to create the exact same game that this series of posts describes, you might want to comment out what’s there instead, so you can revisit it later.

Create Object

The first step to making an object move across the screen is to create an object. Our object is going to be a rectangle:

    func createScene(){
        self.physicsBody = SKPhysicsBody(edgeLoopFrom: self.frame)
        self.physicsBody?.isDynamic = false
        self.physicsBody?.affectedByGravity = false
        
        //self.physicsWorld.contactDelegate = self
        self.backgroundColor = SKColor(red: 255.0/255.0, green: 255.0/255.0, blue: 255.0/255.0, alpha: 1.0)
        
        // Assign to the 'player' property (declared below) so that the
        // touch and update handlers can reach it later
        player = CreatePlayer()
        self.addChild(player!)

    }
    
    // Returns SKShapeNode
    func CreatePlayer() -> SKShapeNode {
        
        let playerSize = CGSize(width: 100, height: 50)
        
        let player = SKShapeNode(rectOf: playerSize)
        player.position = CGPoint(x:self.frame.midX, y:self.frame.midY)
        player.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 0.0/255.0, alpha: 1.0)
        
        player.physicsBody = SKPhysicsBody(rectangleOf: playerSize)
        
        player.physicsBody?.affectedByGravity = false
        player.physicsBody?.isDynamic = true
            
        return player
    }

There are two functions in the code above. ‘createScene()’ establishes the game environment: background colour, physics rules, etc. This, in turn, calls ‘CreatePlayer()’ and adds the result to ‘self’.

A Quick Note on ‘Self’

What is ‘self’? As you can see from the class definition, the GameScene class inherits from SKScene:

class GameScene: SKScene {

In C# terms, ‘self’ translates to ‘this’, so we’re referring to the instance of a class that inherits from SKScene.

We now have a couple of functions to set the game up, but haven’t yet called them; that’s where the didMove event comes in. This fires once the scene has been presented by the view:

    override func didMove(to view: SKView) {
        createScene()
    }

And we have our box:

Move Game Object

But the target wasn’t just to draw something on the screen, but to move it. Let’s make it move when you touch the screen. First, we’ll need to declare a couple of variables to hold the player object and the speed it’s travelling at (BTW, there’s a reason the variable is called playerSpeed and not speed – speed is a variable that exists in the SpriteKit namespace – although you wouldn’t know it from the error!):

class GameScene: SKScene {
    
    private var player : SKShapeNode?
    private var playerSpeed : CGFloat? = 0

There’s a series of events that handle when the user touches the screen, but let’s use ‘touchesBegan’ for now:

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        playerSpeed! += CGFloat(1.0)
    }

Finally, the update function allows game variables to be changed before each frame is rendered:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
    }

As far as I can establish, any node added to the SKScene object is automatically rendered each frame by virtue of being there. So, if we were relating this to a C# game engine such as MonoGame, the Draw function is automated.
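To put that in C# terms: if this were MonoGame, it would be roughly as though Draw came for free and you only ever wrote Update. A loose sketch of the analogy – entirely illustrative, and nothing here is SpriteKit’s API:

using Microsoft.Xna.Framework;

public class BoxGame : Game
{
    private float _playerX;
    private float _playerSpeed;

    // Analogous to SpriteKit's update(_:) - mutate game state each frame
    protected override void Update(GameTime gameTime)
    {
        _playerX += _playerSpeed;
        base.Update(gameTime);
    }

    // In SpriteKit, there's no equivalent of this to write: nodes added
    // to the scene are drawn automatically
    protected override void Draw(GameTime gameTime)
    {
        base.Draw(gameTime);
    }
}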

References

http://sweettutos.com/2017/03/09/build-your-own-flappy-bird-game-with-swift-3-and-spritekit/

https://developer.apple.com/documentation/spritekit/skscene/1519607-didmove

https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/Methods.html

Debugging Recommendations Engine

Here I wrote about how to set up and configure the MS Azure recommendations engine.

One thing that has become painfully apparent while working with recommendations is how difficult it is to work out what has gone wrong when you don’t get any recommendations. The following is a handy check-list for the next time this happens to me… so others may, or may not, find this useful*:

1. Check the model was correctly generated

Once you have produced a recommendations model, you can access that model by simply navigating to it. The URL is in the following format:

{recommendations uri}/ui

For example:

https://pcmrecasd4zzx2asdf.azurewebsites.net/ui

This gives you a screen such as this:

The status (listed in the centre of the screen) tells you whether the build has finished and, if so, whether it succeeded or not.

If the build has failed, you can select that row and drill into it to find out why.

In the following example, there is a reference in the usage data to an item that is not in the catalogue.

Other reasons that the model build may fail include invalid, corrupt or missing data in either file.

2. Check the recommendation in the interface

In order to exclude other factors in your code, you can interrogate the model directly by clicking on the “Score” link above; you will be presented with a screen such as this:

In here, you can request direct recommendations to see how the model behaves.

3. Volume

If you find that your score is consistently returning as zero, then the issue may be with the volume of usage data that you have provided. Around 1k** rows of usage data is the sort of volume you should be dealing with; this statistic was based on a catalogue of around 20 – 30 products.

4. Distribution

The number of users matters – for the above figures, a minimum of 15** users was necessary to get any scores back. If the data sample is across too small a user base, it won’t return anything.

Footnotes

* Although this post is written by me, and is for my benefit, I stole much of its content from wiser work colleagues.

** Arbitrary values – your mileage may vary.

Google Cloud Platform – Using Cloud Functions

In a couple of previous posts I wrote about how you might create a basic Azure function. In this post, I’ll do the same thing using the Google Cloud Platform (GCP).

Google Cloud Functions

This is what Google refer to as their serverless function offering. The feature is, at the time of writing, still in Beta, and only allows JavaScript functions (although, after the amount of advertising they have been doing on Dot Net Rocks, I did look for a while for the C# switch).

As with many of the Google features, you need to enable functions for the project that you’re in:

Once the API is enabled, you’ll have the opportunity to create a new function; doing so should present you with this screen:

Since the code has to be in JavaScript, you could use the following (which is a cut-down version of the default code):

exports.helloWorld = function helloWorld(req, res) {
    
    console.log(req.body.message);
    res.status(200).send('Test function');

};

Once you create the function, you’ll see it spin for a while before it declares that you’re ready to go:

Testing

In order to test that this works, simply navigate to the URL given earlier on:
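Alternatively, since the function reads req.body.message, you can exercise it from C# by POSTing some JSON to the trigger URL. A quick sketch – the URL below is made up, so use the one shown on your function’s Trigger tab:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CallFunction
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var body = new StringContent(
                "{\"message\": \"hello from C#\"}", Encoding.UTF8, "application/json");

            // Hypothetical trigger URL - region and project are placeholders
            var response = await client.PostAsync(
                "https://us-central1-my-project-id.cloudfunctions.net/helloWorld", body);

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}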

References

https://cloud.google.com/functions/docs/writing/http