A few days ago I woke up to an email that just made my day. It let me know that I had been awarded Microsoft MVP status for Windows Development for my community contributions over the past year.
Needless to say, I am deeply honored and proud to have received the award, and I will definitely continue contributing to the community.
Here’s a pic (will replace with a better quality one soon) of the award package that I received in the mail. Now my goal is to add to those year rings until there’s no more room on the plaque.
Before Windows 8.1 there was no way to run UI code in a background task in a Windows Store application. Windows Phone Silverlight task agents supported this, but Windows 8 Store apps did not. With Windows 8.1, and UWP after it, there's a simple solution: XamlRenderingBackgroundTask. This is a special kind of background task that can run on the UI thread, which allows you to, for example, render an image for your live tile.
However, there's a small caveat that comes with it, and it has burned me twice, costing me a couple of hours of debugging each time.
When you implement a "normal" background task, your code looks something like this:
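The original code listing didn't survive here, so this is a minimal sketch of a standard background task; the class name `MyBackgroundTask` is a placeholder:

```csharp
using Windows.ApplicationModel.Background;

public sealed class MyBackgroundTask : IBackgroundTask
{
    public void Run(IBackgroundTaskInstance taskInstance)
    {
        // Take a deferral if you do any async work, then complete it.
        var deferral = taskInstance.GetDeferral();

        // ... do the background work here ...

        deferral.Complete();
    }
}
```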
You implement the interface and everything works. However, if you then switch to a XamlRenderingBackgroundTask, it stops working. I usually figure out why after a couple of hours of frustration; hopefully this post will shorten that time for you. The catch is that XamlRenderingBackgroundTask already implements the Run() method of IBackgroundTask and instead exposes a virtual OnRun() method where your code belongs. So while your Run() method will never be called, you won't even get a compilation error, which makes this a pain to debug. Here's the proper way to implement your background task:
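Again the original listing is missing, so this is a sketch of the corrected version; the class name and the tile-rendering comment are placeholders, but the override of OnRun() is the key point:

```csharp
using Windows.ApplicationModel.Background;
using Windows.UI.Xaml.Media;

public sealed class MyXamlRenderingTask : XamlRenderingBackgroundTask
{
    // Note: do NOT declare your own Run() here. The base class already
    // implements IBackgroundTask.Run() and calls OnRun() for you; a Run()
    // method you add yourself compiles fine but is never invoked.
    protected override void OnRun(IBackgroundTaskInstance taskInstance)
    {
        var deferral = taskInstance.GetDeferral();

        // ... render your XAML to an image for the live tile, etc. ...

        deferral.Complete();
    }
}
```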
Collaboration is a big value proposition that comes with HoloLens. Not only can you build truly immersive experiences and enable scenarios that were never possible before, but you can do so in a collaborative environment, letting multiple people see and create at the same time.
One of the easiest ways to enable this is to use a component of HoloToolkit, the open-source toolkit for HoloLens (you can find it on GitHub). The sharing component in that framework is an extensible, multi-platform, multi-architecture network communication engine that is nicely encapsulated and exposed to Unity to make sharing as easy as possible.
Before we dive into its components, I want to explain exactly how this works. Probably the most important thing to mention is that a separate instance of the app runs on every device that shares the experience. The actual functionality and 3D objects are not streamed over the network; each device renders them itself, and only the interactions are streamed.
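To make that concrete, here is a hedged, standalone sketch (not HoloToolkit's actual wire format) of what "streaming only the interaction" can look like: a tiny message carrying an object id and its new position, which every device applies to its own locally rendered scene. The struct name and fields are hypothetical:

```csharp
using System;
using System.IO;

// Hypothetical interaction message: roughly 16 bytes on the wire,
// instead of streaming rendered frames or 3D geometry.
public struct MoveObjectMessage
{
    public int ObjectId;
    public float X, Y, Z;

    public byte[] Serialize()
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(ObjectId);
            w.Write(X);
            w.Write(Y);
            w.Write(Z);
            return ms.ToArray();
        }
    }

    public static MoveObjectMessage Deserialize(byte[] data)
    {
        using (var r = new BinaryReader(new MemoryStream(data)))
        {
            return new MoveObjectMessage
            {
                ObjectId = r.ReadInt32(),
                X = r.ReadSingle(),
                Y = r.ReadSingle(),
                Z = r.ReadSingle()
            };
        }
    }
}
```

Each app instance already has the scene; receiving this message, it simply moves object `ObjectId` to the new position locally.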
A while ago I created a tool for the HoloLens device to calculate the interpupillary distance (the distance between the pupils of your eyes) needed to properly configure the device. You can read all the details about that on the GitHub page.
In summary, I used the Microsoft Cognitive Services Face Detection API to detect facial landmarks and calculate the distance from them. One of the most interesting parts of the app was loading an image and annotating it with the locations of the faces detected in it, just like you see in the online demos.
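The distance calculation itself boils down to simple geometry. As a hedged sketch (not the tool's actual code): the Face API's landmarks include pupil positions in pixel coordinates, and given a known scale, here assumed to come from a reference object of known physical size in the same image, the IPD is the Euclidean distance between the pupils, scaled to millimeters:

```csharp
using System;

public static class Ipd
{
    // lx, ly / rx, ry: left and right pupil positions in pixels
    // (e.g. the pupilLeft / pupilRight facial landmarks).
    // mmPerPixel: physical scale of the image, assumed known.
    public static double FromPupils(double lx, double ly,
                                    double rx, double ry,
                                    double mmPerPixel)
    {
        double dx = rx - lx;
        double dy = ry - ly;
        return Math.Sqrt(dx * dx + dy * dy) * mmPerPixel;
    }
}
```

For example, pupils 250 pixels apart in an image with a 0.25 mm/pixel scale give an IPD of 62.5 mm.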
In today's part I will analyze how we use the APIs by looking at the request/response structures and checking the performance (time and size). We compared efficiency and accuracy in part 2, so here we will focus on how easy the response is to consume rather than how accurate it is.
Most of the HoloLens development guidance you'll find out there will tell you that when you build your HoloLens app from Unity, you must select D3D, not XAML, in the project build settings.
You might think (as I did for a while) that the reason is obvious: that this is what makes the app run as a 3D app rather than a 2D one. But that's not true. The 2D vs. 3D aspect of a HoloLens app built with Unity lies in the Virtual Reality Enabled flag that you can find in the Player Settings window. You need to set that to Windows Holographic for your app to behave like a nice volumetric, 3D application.
So, what's the difference between D3D and XAML, you ask?
I've been using Visual Studio daily for a very long time now, and I think I've learned a thing or two about getting the most out of it and being as productive as I can be. In today's post I will present a few tips that I find myself using a lot and find extremely useful, but that (based on my observations) are less well known.
The tips below apply to Visual Studio 2015, but some of them are available in older versions as well.
After a short introduction and a high-level feature comparison in part 1, let's look at the APIs in action and see how they perform on some images. Make sure to check part 3 for an analysis of the API itself.
We'll run some images through the interactive demo sites of each service and list the results.
I recently got interested in exploring the new wave of computer vision APIs out there, to see what they can do and how to use them in my apps. I obviously knew about Microsoft's Cognitive Services Computer Vision API (formerly Project Oxford), but I wanted to see who else is in the space. I picked the two most obvious competitors and decided to do a quick comparison: Google Cloud Vision and IBM Watson Visual Recognition. I explored all the options and tried to compare them at a high level.
Update: Check part 2 for a more in-depth feature comparison and part 3 for an analysis of the API itself.