I just upgraded one of my older bots to v3 and I'm embarrassed to admit it took me way too long to figure something out.
All v3 bots, unlike v1 bots, get an AppId and Password that you need to set in your configuration file so the bot can communicate with the Bot Framework service online. This is also true for the bots you deploy on the new Azure Bot Service.
<appSettings>
  <!-- update these with your AppId and one of your app secret keys -->
  <add key="MicrosoftAppID" value="7346****8c7f8270bf63" />
  <add key="MicrosoftAppPassword" value="g***B" />
</appSettings>
However, once you add those values to your config, you might notice that your bot no longer works in the emulator, returning a 500 Internal Server Error. Here’s how to fix that:
RenderTargetBitmap is a very useful UWP class if you are looking to render a piece of XAML to an image.
Some use cases that come to mind are saving a signature from an InkCanvas or, perhaps even more common, generating an image for a live tile.
The visual tree
One of the most important prerequisites is that the control you are trying to render is part of the visual tree. That means you cannot just instantiate a UIElement and pass it to RenderTargetBitmap; you need to pick it up directly from the page. That is easy enough when you’re rendering an image while your app is in the foreground, but it becomes a bit trickier when you are rendering the image in a background task.
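A minimal sketch of the foreground case (class, method, and element names are my own): render an element that is already in the page’s visual tree and save the pixels out as a PNG.

```csharp
// Sketch: capture an on-page XAML element to a PNG file.
// The element passed in must already be in the visual tree.
using System;
using System.Runtime.InteropServices.WindowsRuntime; // for IBuffer.ToArray()
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Media.Imaging;

public static class XamlCapture
{
    public static async Task SaveElementAsPngAsync(UIElement element, StorageFile file)
    {
        var rtb = new RenderTargetBitmap();
        await rtb.RenderAsync(element);              // fails if element isn't in the visual tree
        IBuffer pixelBuffer = await rtb.GetPixelsAsync();

        using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
        {
            var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
            encoder.SetPixelData(
                BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
                (uint)rtb.PixelWidth, (uint)rtb.PixelHeight,
                96, 96, pixelBuffer.ToArray());
            await encoder.FlushAsync();
        }
    }
}
```

From a page’s code-behind you would call it with an element that is on the page, e.g. `await XamlCapture.SaveElementAsPngAsync(RootGrid, file);` (where `RootGrid` is whatever element you want captured).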
I recently started a new job and was assigned to a project that already had quite a bit of existing code. As I was starting to get lost in the complexities of the solution, I found NDepend, which helped me get my bearings and get productive really quickly.
So what is NDepend? It’s a static analyzer, a dependency explorer, a code health monitor and much more. I’ll go through some of the features in more detail in this article.
My first contact with it was getting the license and installing the software. It’s not the most straightforward process (definitely not a one-click install), but I don’t think the target demographic for the application will have any issues. You have the option to use the app as a Visual Studio extension or as a standalone application outside of Visual Studio. I definitely recommend the Visual Studio extension, since it feels a lot nicer to see and use the reports in context.
Once installed and with a solution open, just use the NDepend menu to analyze your solution. When it’s done you will get a very nice interactive dashboard with all sorts of information about your solution ranging from lines of code to code coverage to code rules. It’s really nicely done and a great starting point. Bonus points for the web view of the dashboard that you or your boss (?) can explore outside of Visual Studio if need be.
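To give a flavor of those code rules: they are written in CQLinq, NDepend’s LINQ-based query language. Here is a small illustrative rule (the 30-line threshold is my own pick, not NDepend’s default):

```csharp
// CQLinq rule: flag methods that are getting too long.
// "warnif count > 0" turns the query into a rule that fails
// whenever the query returns any results.
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```

Rules like this run as part of the analysis, and violations show up on the dashboard alongside the built-in metrics.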
I ran into what seems like a really strange bunch of errors the other day while trying to build store packages for a UWP application.
There were about 15k (yes, 15 thousand!!!) errors that only showed up when building store packages. The application would compile and run fine in debug, but as soon as I tried to build the package it would error out. See below:
A few days ago I woke up to an email that just made my day. It was to let me know that I have been awarded the Microsoft MVP status for Windows Development for my community contributions over the past year.
Needless to say, I am deeply honored and proud to have received the award, and I will definitely continue contributing to the community.
Here’s a pic (will replace with a better quality one soon) of the award package that I received in the mail. Now my goal is to add to those year rings until there’s no more room on the plaque.
Before Windows 8.1 there was no way to run UI code in a background task in Windows Store applications. It was possible in Windows Phone Silverlight task agents, but it did not work in Windows 8 Store apps. With Windows 8.1, and UWP after that, there’s a simple solution: XamlRenderingBackgroundTask. This is a special kind of background task that can run on the UI thread, so it allows you to, for example, render an image for your live tile.
However, there’s a very small caveat that comes with it, and I got burned by it twice, wasting a couple of hours debugging each time.
When you implement a “normal” background task your code looks something like this:
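Something along these lines (the class name and body are placeholders):

```csharp
// A "normal" background task: implement IBackgroundTask and its Run() method.
using Windows.ApplicationModel.Background;

public sealed class TileUpdateTask : IBackgroundTask
{
    public void Run(IBackgroundTaskInstance taskInstance)
    {
        // Take a deferral if you do any async work, then complete it.
        var deferral = taskInstance.GetDeferral();
        // ... do your background work here ...
        deferral.Complete();
    }
}
```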
You implement the interface and everything works. However, if you then switch the base class to XamlRenderingBackgroundTask, it stops working. I usually figure it out after a couple of hours of frustration; hopefully this will help you shorten that time. The trick is that XamlRenderingBackgroundTask already implements the Run() method of IBackgroundTask and instead exposes a virtual OnRun() method where you are supposed to write your code. So your Run() method simply never gets called, and since there’s no compilation error either, it’s a perfect thing to debug. Here’s the proper code to implement your background task:
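A minimal sketch (the class name and body are placeholders; the OnRun override is the part that matters):

```csharp
// With XamlRenderingBackgroundTask you override OnRun() instead of
// implementing Run(); the base class already implements Run() for you.
using Windows.ApplicationModel.Background;
using Windows.UI.Xaml.Media;

public sealed class TileRenderTask : XamlRenderingBackgroundTask
{
    protected override void OnRun(IBackgroundTaskInstance taskInstance)
    {
        var deferral = taskInstance.GetDeferral();
        // ... XAML rendering work (e.g. RenderTargetBitmap) goes here ...
        deferral.Complete();
    }
}
```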
Collaboration is a big value proposition that comes with HoloLens. Not only can you enable truly immersive experiences and enable scenarios never-before possible but you can do this in a collaborative environment letting multiple people see and create at the same time.
One of the easiest ways to enable this is to use a component of HoloToolkit, the open-source toolkit for HoloLens (you can find it on GitHub). The sharing component in that framework is an extensible, multi-platform, multi-architecture network communication engine that is nicely encapsulated and exposed to Unity to make sharing as easy as possible.
Before we dive into its components, I want to explain exactly how this works. Probably the most important thing to mention is that a separate instance of the app runs on each device sharing the experience. The actual functionality and 3D objects are not streamed over the network; they are rendered by each device, and only the interactions are streamed.
A while ago I created a tool for the HoloLens device to calculate the interpupillary distance (the distance between the pupils of your eyes) needed to properly configure the device. You can read all the details about that on the GitHub page.
In summary, I used the Microsoft Cognitive Services Face Detection API to detect facial landmarks and calculate the distance based on them. One of the most interesting parts of the app was loading an image and annotating it with the locations of the faces detected in it, just like you see in the online demos.
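For context, the detection step looked roughly like this with the client library of that era (the Microsoft.ProjectOxford.Face NuGet package; the class, method names, and key are placeholders, and this is a sketch rather than the tool’s actual code):

```csharp
// Sketch: detect faces with landmarks and measure the pixel distance
// between the pupils for each detected face.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;

public static class PupilDistance
{
    public static async Task<double[]> MeasureAsync(Stream imageStream, string subscriptionKey)
    {
        var client = new FaceServiceClient(subscriptionKey);

        // Ask the service to return facial landmarks (pupils, nose, mouth, ...).
        Face[] faces = await client.DetectAsync(imageStream, returnFaceLandmarks: true);

        var distances = new double[faces.Length];
        for (int i = 0; i < faces.Length; i++)
        {
            var left = faces[i].FaceLandmarks.PupilLeft;
            var right = faces[i].FaceLandmarks.PupilRight;

            // This is a distance in image pixels; converting it to millimeters
            // requires a real-world size reference in the photo.
            double dx = right.X - left.X, dy = right.Y - left.Y;
            distances[i] = Math.Sqrt(dx * dx + dy * dy);
        }
        return distances;
    }
}
```

The landmark coordinates are also what you need to draw the annotations: each face comes back with a rectangle and landmark points you can overlay on the original image.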