Archive for the ‘Microsoft’ Category

I log a lot of event data in Microsoft AppCenter. Thousands of events per day. Unfortunately, AppCenter has a hard limit of 200 distinct custom events for their event data. So, when you look at that great event filter they have, you’re not actually seeing all the data. HOWEVER, that does NOT mean AppCenter is limited to a scant 200 events. Hardly. Behind the scenes, AppCenter is powered by Application Insights. As long as you have storage available, you can keep all your events in App Insights. The problems I ran into were:

  1. I didn’t know Kusto, or KQL, the query language used by App Insights, and
  2. It’s not obvious how to access all the metadata I log for each event.

What’s metadata, you may ask? Well, it’s the additional data you log alongside an event. In AppCenter, the term for metadata is Properties, but I digress. For example, if you’re logging a Software Update Success or Failure, you may also include metadata about a product ID, model number, firmware version, and so forth. Finding the event data is easy. Finding and reporting on the metadata is not so straightforward.
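
For context, here’s roughly what logging such an event with Properties looks like from the app side, using the AppCenter SDK’s Analytics.TrackEvent call. This is just a sketch – the event name and property names mirror the examples in this post, and the helper method is hypothetical:

using System.Collections.Generic;
using Microsoft.AppCenter.Analytics;

public static class UpdateTelemetry
{
    // Logs a software update result along with its metadata (Properties).
    // The event name and property names mirror this post; the values are supplied by the caller.
    public static void TrackUpdateSucceeded(string productId, string modelNumber, string firmwareVersion)
    {
        Analytics.TrackEvent("Software Update: Succeeded", new Dictionary<string, string>
        {
            { "Product ID", productId },
            { "Model Number", modelNumber },
            { "Firmware Version", firmwareVersion }
        });
    }
}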

So, dear reader, I have an example query for you. You can copy & paste this into your App Insights query editor and be good to go.

So here’s the query I used to extract a summary of how many software updates succeeded, for specific model numbers, with a count by model number:

customEvents
| where name in ("Software Update: Succeeded")
| extend jsonData = tostring(parse_json(tostring(customDimensions.Properties)).['Model Number'])
| where jsonData in ("model1", "model2", "model3", "model4")
| summarize count() by jsonData

So what’s happening here? Let me explain:

customEvents

This is the table where App Insights stores all events sent by AppCenter.
| where name in ("Software Update: Succeeded")

This filters those events by the event name.
| extend jsonData = tostring(parse_json(tostring(customDimensions.Properties)).['Model Number'])

This parses the metadata field – aka customDimensions.Properties – as JSON, extracts a particular property – in this case, Model Number – and returns that value as a string.
| where jsonData in ("model1", "model2", "model3", "model4")

This is a simple filter query. I found if I wanted to get all model numbers I could simply use the following, though your mileage may vary:

| where jsonData !in ("")

And then finally:
| summarize count() by jsonData

This takes the results, by model number, and summarizes with counts. App Insights will even automatically generate a graph for you!

As you refine the query above and use more extend statements to extract more data, you may want a more meaningful variable name than jsonData 😁 For example, here’s a more robust query I wrote, which counts the unique users performing the action for each model:

customEvents
| where name in ("Software Update: Succeeded")
| where timestamp > ago(30d)
| extend modelNumber = tostring(parse_json(tostring(customDimensions.Properties)).['Model Number'])
| extend modelName = case(
    modelNumber == "model1", "Headphones",
    modelNumber == "model2", "Soundbars",
    modelNumber == "model3", "Powered Speakers",
    "n/a")
| where modelNumber !in ("")
| summarize count() by user_Id, modelName
| summarize count() by modelName

The two summarize operators first group the events by user and model, then count the number of unique users per model. Here you can also see how I used proper variable names and took advantage of other useful customEvents columns, such as user_Id and timestamp. This also gives me data similar to the cool, useful graphs AppCenter shows on their dashboard.

You can do a lot more with this data, but it should get you started. I hope it helps you as it did me.

Additional tip:

Application Insights only stores data for 30 days by default. If you want to retain and report on your events beyond that timeframe, make sure you update your App Insights instance settings.

This past week, I had the opportunity to write an app to handle scanning in patients for CoViD testing in our city, Fishers, Indiana. Fishers is one of the top 10 places to raise a family in the United States. Reinforcing that reputation, the city is providing CoViD testing to all 90,000 residents, free of charge. The process is simple:

  1. Register online via an assessment
  2. Receive a patient ID
  3. Scan in at appointment, assign test, and take test
  4. Receive answers within 3 business days

Seems simple, right? Our city isn’t using the painful “touch the back of your eyeball” technique, either. Instead, it’s at the front of the nasal cavity – simple, painless, and you get results. It also ensures contact tracing where there are infections, so as a community we can help prevent the spread of disease.

But can we do it?

The problem is, roughly a week ago, Step 3 didn’t exist. The patient ID was from a survey with a QR code. The kit didn’t have any identifier whatsoever – it’s just one of many bagged kits in a box. And this data was being manually entered by other entities running similar tests. To give you an idea of how prone to error this process is, consider that the patient ID looks like this:

Sample Survey ID

Any chance for typos much? And that doesn’t solve the kit identifier problem – because there isn’t one.

Box of Tests
Figure: A box of test kits.

Solving the Problem

So, last Saturday, I received an email from our City’s IT Director. He wanted to know if I knew anyone who could marry the patients’ ID with the kit ID for proper tracking. If you know me, this type of project is right up my alley. This project screamed MOBILE APP! I said “I’ll do it!” It would be like a weekend hackathon!

Most cities aren’t offering CoViD testing, and not a single state is offering such a service to all residents. Fishers is different – we’re a “vibrant, entrepreneurial city,” as our awesome forward-thinking mayor, Scott Fadness, often exclaims. His team takes novel approaches to addressing community needs, and this was no different.

Where there is testing, as I understand it, it’s with PCs and handheld scanners. What a nightmare it must be to keep such a setup running – with a laptop, with software that’s manually installed, and patients scanned with a handheld that can’t scan through glass. I’ve worked with those setups before – the technology issues are a huge PITA. Let alone deploying updates in any timely fashion!

We decided at the get-go a mobile app would be the right approach. When asked about barcode scanners, I explained “We don’t need one.” The built-in cameras on modern day cell phones are more than capable of scanning any type of barcode, QR code, and so forth. As an added bonus, any cell phone we use will have Internet connectivity, with Wi-Fi as a backup. The beauty of it is one single device, one single app, everything self-contained and easy to use.

The beauty of it is one single device, one single app, everything self-contained and easy to use.

The Requirements

After our initial discussion, and a bit of back and forth, this was the decided-upon workflow:

  1. Scan the Patient QR Code and Kit ID.
  2. Come up with a Kit Scanning Process. We decided on CODE39 barcodes that would be printed beforehand so technology wouldn’t be an issue each day.
  3. Store the Patient ID and Kit ID for later retrieval. This ended up being “uploaded” to the survey itself, ensuring we didn’t need to store any PII, and didn’t have to build a back-end data store. Small favors…

And this was the mockup:

2020-04-25 Fishers CoViD App
Figure: Whiteboarded app.

Draft Napkin
Figure: Rough brain dump with ideas.

Initially, we talked about generating the Kit barcode on the mobile device, then printing it to a wireless printer in the testing bay. This certainly seemed possible. However, the more I thought about it, the more I realized we could simply pre-print the labels and affix them as needed. This would provide some obvious benefits:

  • We wouldn’t have to come up with a mobile printing solution, which can be tricky, and is not a simple problem to solve cross-platform.
  • We’d keep a printer breakdown out of the picture, ensuring technology didn’t “get in our way”

The key is to get the patients in, tested, and out as efficiently as possible. The simpler we kept the process, the less could go wrong. So, on-demand printing was eliminated, and we’d simply pre-print labels instead. They’d be affixed to the test kit and then assigned to a patient.

Common Needs of Mobile Apps

When determining the app development approach, I took into consideration that every mobile app I’ve built generally has three primary needs that must be addressed:

    1. Where does data come from? Usually this is an API or the user. If an API doesn’t exist, considerable time is necessary to build one.
    2. Where does data go? Also usually an API for some data store. Same API issue.
    3. How does the user interact with data? The app is useless if the user can’t figure it out and, if possible, enjoy the experience. This can have design cost and time impacts.

For Need 1, we knew we had a QR Code. BUT how would we know it’s valid? How would we get the patient data? Well, it just so happened the survey provider had an API. Sweet! We hopped on a call with them and they provided an API key and documentation. That’s how API access should work! They even provided a RegEx to validate the scanned patient ID, which internally to them was actually just a survey ID.

What about the kits? We decided to use a CODE39 barcode font and print on standard Avery labels. We came up with a standard naming and numbering convention, a RegEx to validate, and would pre-print them – a few hundred per day. This would ensure the labels were verifiable after scanning, and that printing wouldn’t be an issue each day. We’d take care of technology problems beforehand – such as printer issues – so they wouldn’t impact patient processing.

Barcodes on Avery Labels
Figure: A combination of Excel to generate the labels following the naming convention, plus a mail merge to place them onto the off-the-shelf labels.
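
To give a feel for what that validation step might look like in code, here’s a minimal C# sketch. The patterns below are placeholders I made up for illustration – the post doesn’t publish the actual survey ID RegEx from Qualtrics or the real kit naming convention:

using System.Text.RegularExpressions;

public static class ScanValidator
{
    // Hypothetical patterns for illustration only – the real survey ID RegEx and
    // kit naming convention are not published in this post.
    private static readonly Regex PatientIdPattern = new Regex(@"^R_[A-Za-z0-9]{15}$");
    private static readonly Regex KitIdPattern = new Regex(@"^KIT-\d{5}$");

    public static bool IsValidPatientId(string scannedValue) =>
        !string.IsNullOrWhiteSpace(scannedValue) && PatientIdPattern.IsMatch(scannedValue.Trim());

    public static bool IsValidKitId(string scannedValue) =>
        !string.IsNullOrWhiteSpace(scannedValue) && KitIdPattern.IsMatch(scannedValue.Trim());
}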

OK, now for Need 2… We can get the data, but we have to store it somewhere. Initially, we thought about building a separate back-end. The survey provider, Qualtrics, explained we could send the data back to their system and store it with the initial survey. Well, that was much better! No new storage API development was needed, as they already had the infrastructure in place. Building a new, solid, secure API in a short period of time would have been no small task.

For Need 3, the user experience, I borrowed my grandfather’s phrase: It must require a PH, D. Push Here, Dummy. I wanted “three taps and a send,” as follows:

  1. Scan the Patient – Once scanned, look the user up and verify they exist, showing confirmation details on the screen.
  2. Scan the Kit – Ensure the barcode matches the expected format.
  3. Confirm & Submit – Prompt to ensure patient details, such as name and postal code, have been verified, then confirm the entry has been saved.

It must require a PH, D. Push Here, Dummy.

That’s it – no chance for typos, and verification at every step, helping things go right. Little animations would show when a step had been completed, and scans could be done in any order.

Picked up 5600 Labels
Figure: Texting back and forth, getting our bases covered.

Xamarin As The Dev Stack

We’re building a mobile app here, and we may need it for multiple platforms. iOS – both iPhone and iPad, Android, and perhaps even Windows. Building each platform separately would take a lot of time to complete – time we didn’t have. It was April 25, we needed to be testing by April 27, and we were going live May 1.

The right choice was Xamarin with Xamarin.Forms – Microsoft’s cross-platform mobile framework. It’s similar to React Native, but you have full access to the underlying platform APIs. That’s because you’re building a real native app, not an interpreted overlay. With a single solution, we could build the iOS, Android, and UWP (Windows) apps, with 90% or more code sharing. I’m a Xamarin certified mobile developer, so this was going to be fun!

Solution Explorer
Figure: The Xamarin app in Visual Studio.

First Draft

Within a few hours, I had an alpha version of the app running. It was rough, and didn’t have the best UI, but it was scanning and talking with the Qualtrics API. Hey, once the base stuff’s working, you can make it look pretty!

The app consisted of a few core components, roughly sketched after this list:

  • App Service – Managing any processes the app needed completed, such as retrieving patient survey details, updating a patient survey record, verifying scanned code formatting, and so forth.
  • API Service – Talking back and forth with the Qualtrics API.
  • Analytics Service – Tracking aspects of the application, such as kit scan successes and failures, any exceptions that may occur, and so forth, so we can improve the app over time.
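
Here’s that rough sketch of how those three components might be shaped as interfaces. The names and members are my guesses for illustration, not the actual app’s code:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical shapes for the three core components described above.
public interface IAppService
{
    bool IsValidPatientId(string scannedValue);
    bool IsValidKitId(string scannedValue);
}

public interface IApiService
{
    Task<PatientRecord> GetPatientAsync(string patientId);              // retrieve survey details
    Task<bool> AssignKitToPatientAsync(string patientId, string kitId); // write the kit ID back to the survey
}

public interface IAnalyticsService
{
    void TrackEvent(string name, IDictionary<string, string> properties = null);
    void TrackError(Exception exception, IDictionary<string, string> properties = null);
}

// Minimal record type used by the sketch above.
public class PatientRecord
{
    public string PatientId { get; set; }
    public string Name { get; set; }
    public string PostalCode { get; set; }
    public string AppointmentTime { get; set; }
}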

Build 1
Figure: Build 1 of the app. I first tested with my Android device, then rolled out to iOS, testing on both an iPhone and iPad.

I also had to ensure scanning went off without a hitch. After all, that’s what this app is doing – getting all the data quickly, then tying it together. I configured the scanning solution to only scan QR codes when scanning the patient ID, and only CODE39 barcodes when scanning kits. That way, if the codes were next to each other, the tech wouldn’t scan the wrong item and cause confusion. Remember, the technicians are medical techs, not computer techs – any technology problem could stop the patient processing flow. We needed to ensure the technology didn’t get in the way. You do that by testing thoroughly, and keeping the end user in mind.

Testing Scanning
Figure: QR code and CODE39 barcodes for testing.
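
The post doesn’t name the scanning library, but assuming something like ZXing.Net.Mobile – a common choice in Xamarin.Forms apps – restricting the accepted formats per scan might look like this sketch:

using System.Collections.Generic;
using System.Threading.Tasks;
using ZXing;
using ZXing.Mobile;

public static class KioskScanner
{
    // Patient scans accept QR codes only; any other symbology is ignored.
    public static Task<Result> ScanPatientCodeAsync() =>
        ScanAsync(new List<BarcodeFormat> { BarcodeFormat.QR_CODE });

    // Kit scans accept CODE39 barcodes only.
    public static Task<Result> ScanKitCodeAsync() =>
        ScanAsync(new List<BarcodeFormat> { BarcodeFormat.CODE_39 });

    private static Task<Result> ScanAsync(List<BarcodeFormat> formats)
    {
        var scanner = new MobileBarcodeScanner();
        var options = new MobileBarcodeScanningOptions { PossibleFormats = formats };
        return scanner.Scan(options);
    }
}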

Final Approach and User Experience

Once the UI was working, I added final touches to the UX to make it friendly and easy to use:

  1. When a technician successfully scanned a patient, the information would appear and a green checkmark would animate in. This would clearly indicate that step was completed. If there was an issue with the verification, they would be prompted to scan again. Optionally, they could manually enter the patient ID, which would follow the same validation steps.
  2. When a kit was scanned, another green checkmark would animate in, signifying that step, too, was complete.
  3. Once both steps had been completed, the technician would clearly understand the two greens meant “good to go” and could submit the patient data. They would be prompted to confirm they had verified all patient data and everything on the screen was correct.
  4. Once patient data was successfully transmitted, a confirmation dialog would appear. Upon dismissal, the UI would animate to the reset state, making it clear it’s OK to proceed to the next patient.

Devices, TestFlight, and Apple, Oh My!

So the app was in a good state. How were we going to get it on devices? This isn’t an app that we want in the App Store. It’s not a general consumer app – at least, not yet. TestFlight to the rescue! We’d push it to Apple’s TestFlight app testing service, then enroll all the iOS devices. That would ensure that, as we tweaked the app, we could quickly push the updates without any messy manual installs.

For those that have deployed iOS apps before, you know this isn’t a fast process. The first version of any app into TestFlight must be reviewed by Apple. I uploaded the first version and waited…

Roughly a day later, Apple rejected the app. BUT WHY? Well, we hadn’t provided any sample QR codes or bar codes to scan, so they rejected it. UGH! Really?? I didn’t even know that was a testing requirement! You learn something new every day… So I sent a URL with some examples to test with, as you can’t upload files to the testing site, and waited. Hours later, thankfully, Apple approved the app for testing!

App Store Test Rejection
Figure: Apple beta review rejection email.

We enrolled the various iPhones and iPads in TestFlight and we were able to start testing. Other than a restriction with SSL over the City’s network, which was quickly resolved, we had our devices ready to go. Not bad for under 48 hours!! 

Note that, once an app is in TestFlight, additional builds go through almost instantly. This ensured we could tweak as needed and not wait 24+ hours to validate each time.

TestFlight Versions
Figure: We could release updates with velocity after the initial approval.

Rolling It Out – Dress Rehearsal

We wanted to make sure the app worked without a hitch. A day before release, we had a “dress rehearsal.” Everyone would be ready for the testing, and we’d introduce the app. The app is a small part of the overall process, but it ties everything together. Tracy, the I.T. Director, and I had been testing in earnest prior to this time, and we were feeling pretty good about it.

That morning, I walked the users through the app, joking about the PH, D requirement. Prior to my arrival, they had been testing the process on one of our citizens, who must have been quite tired from all the work:

Test Patient

The pressing questions were:

  • Can we scan a QR code through glass, so a citizen doesn’t have to roll down their window? Yes, unless it’s super tinted, which would be illegal anyway.
  • What if we can’t scan the code? This wasn’t an issue, except for a QR code variant issue discussed later, and manual entry was supported just in case.
  • What if Internet access goes down? We had cellular backup on all devices.
  • How will we apply the barcode to the kit? Peel and stick, then scan. We scan after removal from the main sheet so we don’t scan the wrong code. In a later version we added a prompt when the scanned patient had already been through the process.
  • What if the QR code is used more than once? This wasn’t an issue, as the name and appointment time wouldn’t match.

Here are a few photos from that morning – that was a lot of fun!

Figure: Photo gallery – Box of Tests, Check In, P100 Masks, Presenting the App, Road to Test, Test Complete, Test Vial.

    Day 1!

    Day 1 was here, and real citizens were about to get tested. I slept well the night before, knowing we had tested thoroughly. We only had one hiccup: The QR code in the email was different than the QR code on the confirmation website. This was causing validation errors, as the website QR code’s patient ID couldn’t be found in the system. Not an app issue, but that doesn’t matter.

    Couldn't find Patient ID
    Figure: Ruh-roh! The QR codes weren’t matching between different sources. Yellow alert!

    The survey provider quickly addressed the issue and we were good to go. It wasn’t a big deal – they provided a website to manually enter the patient ID for scanning with a properly generated QR code, and it barely impacted patients. Day 1 was a rousing success!

    2020-04-30 In the Field
    Figure: The app in use!

    Wrapping Up

    Going from no-app to app being used with patients in less than one week was an incredible experience. It feels great to help the community during this period of uncertainty. I’m grateful our city wanted to make the process as seamless as possible, using technology to help things go right, and providing me the opportunity to assist. I’m thankful that, once again, Xamarin was a great solution.

    I’ll probably have a technology walk-through in the near future – I didn’t want to concentrate on the underpinnings of the application for this article. I’ll leave that to a discussion for the Indy Xamarin Meetup.

    Final Version
    Figure: The final app, with my info scanned in.

    I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of them on his iPad, photos of his pet lizard, and informed me of many lizard facts. He’s also a talented sketch artist – showcasing many drawings of Pokemon, lizards and more. Oh, yeah, he’s also into computers and loves his iPad.

    Part of the mentoring program is to help with school, being there as they adjust to growing up, and both respecting and encouraging their interests.

    It just so happens that he had a science project coming up. He wasn’t sure what to write about. His pet lizard recently had an attitude shift, and he figured it was because it wasn’t getting as much food week over week. After he adjusted its feeding, its attitude changed again. So, he wanted to cover that somehow.

    Seeing his interest in lizards, drawing, and computers I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show them drawings he did. He loved the idea.

    Planning

    We only get to meet for 30 minutes each week. So, I gave him some homework. Next time we meet, “show me what the app would look like.” He gleefully agreed.

    One week later, he proudly showed me his vision for the app:

    Reptile Tracker

    I said “Very cool.” I’m now convinced “he’s in” on the project, and taking it seriously.

    I was also surprised to learn that my expectations of “show me what it would look like” were different from what I received from someone both much younger than I and with a different world view. To him, software may simply be visualized as an icon. In my world, it’s mockups and napkin sketches. It definitely made me think about others’ perceptions!

    True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

    1. Identify if there are reptiles in the photo.
    2. Tell them if it’s safe to pick it up, if it’s venomous, and so forth.
    3. Get one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

    Alright, time for the next assignment. My homework was to figure out how to do it. His homework was to draw up the Lizard, Snake, and Turtle that will be shown in the app.

    Challenge accepted!

    I quickly determined a couple key design and development points:

    • The icon he drew is great, but looks like a drawing on the screen. I think I’ll need to ask him to draw them on my Surface Book, so they have the right look. Looks like an opportunity for him to try Fresh Paint on my Surface Book.
    • Azure Cognitive Services, specifically their Computer Vision solution (API), will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it a bit for this article, as the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

    Writing the Code

    The weekend came, and I finally had time. I had been thinking about the app the remainder of the week. I woke up early Saturday and drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…

    20181105_083756

    I broke out my shiny new MacBook Pro and spun up Visual Studio Mac. Xamarin Forms was the perfect candidate for this project – cross platform, baby! I started a new Tabbed Page project, brought over some code for taking photos with the Xam.Plugin.Media plugin and resizing them, and the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it’s only the first week 🙂

    Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!

    Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
     
    namespace ReptileTracker.Services
    {
         public interface IImageRecognitionService
         {
             string ApiKey { get; set; }
             Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
         }
    }
    

    Now it was time to check out Mike’s article. It made sense, and was close to what I wanted. However, the packages he referenced were for Microsoft’s Project Oxford. By 2018, those capabilities had been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

    A few developer notes for those playing with Azure Cognitive Services:

    • Hold on to that API key, you’ll need it
    • Pay close attention to the Endpoint on the Overview page – you must provide it, otherwise you’ll get a 403 Forbidden

    Figure: The Computer Vision resource’s Overview page in the Azure portal, showing the API key and Endpoint.

    And here’s the implementation. Note the implementation must have a parameter-less constructor, otherwise Dependency Service won’t resolve it.

    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Threading.Tasks;
    using ReptileTracker.Services;
    using Xamarin.Forms;
     
    [assembly: Dependency(typeof(ImageRecognitionService))]
    namespace ReptileTracker.Services
    {
        public class ImageRecognitionService : IImageRecognitionService
        {
            /// <summary>
            /// The Azure Cognitive Services Computer Vision API key.
            /// </summary>
            public string ApiKey { get; set; }
     
            /// <summary>
            /// Parameterless constructor so Dependency Service can create an instance.
            /// </summary>
            public ImageRecognitionService()
            {
     
            }
     
            /// <summary>
            /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
            /// </summary>
            /// <param name="apiKey">API key.</param>
            public ImageRecognitionService(string apiKey)
            {
     
                ApiKey = apiKey;
            }
     
            /// <summary>
            /// Analyzes the image.
            /// </summary>
            /// <returns>The image.</returns>
            /// <param name="imageStream">Image stream.</param>
            public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
            {
                const string funcName = nameof(AnalyzeImage);
     
                if (string.IsNullOrWhiteSpace(ApiKey))
                {
                    throw new ArgumentException("API Key must be provided.");
                }
     
                var features = new List<VisualFeatureTypes> {
                    VisualFeatureTypes.Categories,
                    VisualFeatureTypes.Description,
                    VisualFeatureTypes.Faces,
                    VisualFeatureTypes.ImageType,
                    VisualFeatureTypes.Tags
                };
     
                var credentials = new ApiKeyServiceClientCredentials(ApiKey);
                var handler = new System.Net.Http.DelegatingHandler[] { };
                using (var visionClient = new ComputerVisionClient(credentials, handler))
                {
                    try
                    {
                        imageStream.Position = 0;
                        visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                        var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                        return result;
                    }
                    catch (Exception ex)
                    {
                        Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                        return null;
                    }
                }
            }
     
        }
    }

    And here’s how I referenced it from my content page:

    pleaseWait.IsVisible = true;
    pleaseWait.IsRunning = true;
    var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
    imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
    var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
    pleaseWait.IsRunning = false;
    pleaseWait.IsVisible = false;
    
    var tagsReturned = details?.Tags != null 
                       && details?.Description?.Captions != null 
                       && details.Tags.Any() 
                       && details.Description.Captions.Any();
    
    lblTags.IsVisible = true; 
    lblDescription.IsVisible = true; 
    
    // Determine if reptiles were found. 
    var reptilesToDetect = AppResources.DetectionTags.Split(','); 
    var reptilesFound = details.Tags.Any(t => reptilesToDetect.Contains(t.Name.ToLower()));  
    
    // Show animations and graphics to make things look cool, even though we already have plenty of info. 
    await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
    await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
    await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
    await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

    That worked like a champ, with a few gotchas:

    • I would receive a 400 Bad Request if I sent an image that was too large. 1024 x 768 worked, but 2000 x 2000 didn’t. The documentation says the image must be less than 4MB, and at least 50×50 (see the resizing sketch after this list).
    • That API endpoint must be initialized. Examples don’t always make this clear. There’s no constructor that takes an endpoint address, so it’s easy to miss.
    • It can take a moment for recognition to occur. Make sure you’re using async/await so you don’t block the UI Thread!
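
    For the size limit in particular, since the post already uses Xam.Plugin.Media for photos, one option is to let the plugin downsize and compress at capture time. A sketch, assuming that plugin (the specific option values are illustrative):

    using System.IO;
    using System.Threading.Tasks;
    using Plugin.Media;
    using Plugin.Media.Abstractions;

    public static class PhotoCapture
    {
        // Captures a photo already downsized and compressed so it stays well under
        // the Computer Vision 4 MB limit mentioned above.
        public static async Task<Stream> TakeAnalyzablePhotoAsync()
        {
            var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
            {
                PhotoSize = PhotoSize.Medium,   // scale the photo down at capture time
                CompressionQuality = 80         // 0-100; trades quality for file size
            });

            return photo?.GetStream();
        }
    }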

    Prettying It Up

    Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.

    Mixed Results

    So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned. Some examples:

    • My barista, Matt, was “a smiling woman working in a store”
    • My mom was “a smiling man” – she was not amused

    Most of the time, as long as the subjects were clear, the scene recognition was correct:

    Screenshot_20181105-080807

    Or close to correct, in this shot with a turtle at Petsmart:

    tmp_1541385064684

    Sometimes, though, nothing useful would be returned:

    Screenshot_20181105-080727

    I would have thought it would have found “White Castle”. I wonder if it won’t show brand names for some reason? They do have an OCR endpoint, so maybe that would be useful in another use case.

    Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

    Screenshot_20181105-081207

    I’ll need to read more about how to improve accuracy, if that’s even an option.

    Good thing I implemented it with an interface! I could try Google’s computer vision services next.

    Next Steps

    We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

    Some things I’d like to try:

    • Highlight the tags in the image, by drawing over the image. I’d make this a toggle.
    • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

    Please let me know if you have any questions by leaving a comment!

    Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

    Update 2018-11-06:

    • The tags are in two different locations – Tags and Description.Tags. Two different sets of tags are in there, so I’m now combining those lists and getting better results (a quick sketch of that merge follows these notes).
    • I found I could get color details. I’ve updated the accent color surrounding the photo. Just a nice design touch.
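
    A minimal sketch of that merge, assuming the same ImageAnalysis result used earlier (the helper class and method names are mine):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

    public static class TagHelper
    {
        // Merges ImageAnalysis.Tags (ImageTag objects) with Description.Tags (plain strings)
        // into one distinct, lower-cased list, ready for matching against DetectionTags.
        public static IList<string> GetAllTags(ImageAnalysis details)
        {
            var primaryTags = details?.Tags?.Select(t => t.Name) ?? Enumerable.Empty<string>();
            var descriptionTags = details?.Description?.Tags ?? Enumerable.Empty<string>();

            return primaryTags
                .Concat(descriptionTags)
                .Select(t => t.ToLowerInvariant())
                .Distinct()
                .ToList();
        }
    }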

    I ran into this issue this week. I would define the Source as a URL and then, nothing…

    It turns out, with FFImageLoading, an indispensable Xamarin.Forms plugin available via NuGet, you must also set the ErrorPlaceholder property if loading your image from a URL. That did the trick – images started loading perfectly!

    I’ve reported what I think is a bug. I haven’t yet looked at their code.

    Here’s an example of how I fixed it:

    Working Code:

    <ff:CachedImage 
        Source="{Binding ModelImageUrl}"
        ErrorPlaceholder="icon_errorloadingimage"
        DownsampleToViewSize="True"
        RetryCount="3"
        RetryDelay="1000"
        WidthRequest="320"
        HeightRequest="240"
        Aspect="AspectFit"
        HorizontalOptions="Center" 
        VerticalOptions="Center" />
    

    Non-Working Code, note the missing ErrorPlaceholder property:

    <ff:CachedImage 
        Source="{Binding ModelImageUrl}"
        DownsampleToViewSize="True"
        RetryCount="3"
        RetryDelay="1000"
        WidthRequest="320"
        HeightRequest="240"
        Aspect="AspectFit"
        HorizontalOptions="Center" 
        VerticalOptions="Center" />
    

    I hope that helps others with the same issue. Enjoy!

    As part of my .NET 301 Advanced class at the fantastic Eleven Fifty Academy, I teach Xamarin development. It’s sometimes tough, as every student has a different machine. Some have PCs, others have Macs running Parallels or Bootcamp. Some – many – have Intel processors, while others have AMD. I try to recommend students come to the class with Intel processors, due to the accelerated Android emulator benefit provided by Intel’s HAXM – Hardware Accelerated Execution Manager. This blog entry is a running list of how I’ve solved getting the emulator running on so many machines. I hope the list helps you, too.

    This list will be updated from time to time, as I find new bypasses. At this time, the list is targeted primarily for machines with an Intel processor. Those with AMD and Windows are likely stuck with the ARM emulators. Umm, sorry. I welcome solutions, there, too, please!

    Last updated: December 4, 2017

    Make sure you’re building from a path whose total length is less than 248 characters.

    That odd Windows problem of long file paths bites us again here. Many new developers tend to build under c:\users\username\documents\Visual Studio 2017\projectname. Add to that the name of the project, and all its subfolders, and the eventual DLLs and executable are out of reach of various processes.

    I suggest in this case you have a folder such as c:\dev\ and build your projects under there. That’s solved many launch and compile issues.

    Use the x86 emulators.

    If you have an Intel processor, then use the x86 and x64 based emulators instead of ARM. They’re considerably faster, as long as you have a) an Intel processor with virtualization abilities, which I believe all or most modern Intel processors do, and b) Intel’s HAXM installed.

    Make sure VT-x / Hardware Virtualization is enabled.

    Intel’s HAXM – which you can download here – won’t run if the processor’s virtualization is disabled. You need to tackle this in the BIOS. That varies per machine. Many devices seem to ship with the feature disabled. Enabling it will allow HAXM to work.

    Uninstall the Mobile Development with .NET Workload using the Visual Studio Installer, and reinstall.

    Yes, I’m suggesting Uninstall + Reinstall. This has worked well in the class. Go to Start, then Visual Studio Installer, and uncheck the box. Restart afterwards. Then reinstall, and restart.

    Mobile Development Workload Screenshot

    Use the Xamarin Android SDK Manager.

    The Xamarin team has built a much better Android SDK Manager than Google’s. It’s easy to install HAXM, update Build Tools and Platforms, and so forth. Use it instead and dealing with tool version conflicts may be a thing of the past.

    Make sure you’re using the latest version of Visual Studio.

    Bugs are fixed all the time, especially with Xamarin. Make sure you’re running the latest bits and your problems may be solved.

    Experiment with Hyper-V Enabled and Disabled.

    I’ve generally had issues with virtualization when Hyper-V is enabled. If you’re having trouble with it enabled, try with it disabled.

    To enable/disable Hyper-V, go to Start, then type Windows Features. Choose Turn Windows Features On or Off. When the selection list comes up, toggle the Hyper-V feature accordingly.

    Note: You may need to disable Windows Device Guard before you can disable Hyper-V. Thanks to Matt Soucoup for this tip.

    Use a real device.

    As a mobile developer, you should never trust the emulators to reflect the real thing. If you can’t get the emulators to work, and even if you can, you have the option of picking up an Android phone or tablet for cheap. Get one and test with it. If you’re not clear on how to set up Developer Mode on Android devices, it’s pretty simple. Check out Google’s article on the subject.

    Try Xamarin’s HAXM and emulator troubleshooting guide.

    The Xamarin folks have a guide, too.

    If all else fails, use the ARM processors.

    This is your last resort. If you don’t have an Intel processor, or a real device available, use the ARM processors. They’re insanely slow. I’ve heard there’s an x86 emulator from AMD, yet it’s supposedly only available for Linux. Not sure why that decision was made, but moving on… 🙂

    Have another solution?

    Have a suggestion, solution, or feature I’ve left out? Let me know and I’ll update!

     

    My latest Visual Studio extension is now available! Get it here: 2017, 2015

    So what is CodeLink?

    Getting two developers on the same page over chat can be time consuming. I work remote, so I can’t just walk to someone’s desk. I often find myself saying “go to this file” and “ok, now find function <name>”. Then I wait. Most of the time it’s only 10-20 seconds lost. If it’s a common filename or function, it takes longer. Even then, mistakes can be made.

    So I asked myself: Self, wouldn’t it be great if I could send them a link to the place / cursor location in the solution I’m at? Just like a web link?

    CodeLink was born.

    So here’s what a CodeLink looks like:

    codelink://[visualstudio]/[AurisIdeas.Common.Security\AurisIdeas.Common.Security.csproj]/[ParameterFactory.cs]/[9]

    I would simply share that CodeLink with a fellow developer. They’d select “Open CodeLink…” in Visual Studio, paste it in, and be brought to that line of code in that project. No more walking them through it, much less waiting.

    Technically, the format is:

    codelink://[Platform]/[Project Unique Path]/[File Unique Path]/[LineNumber]
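
    For the curious, parsing that format back out is straightforward. Here’s a hypothetical sketch – not the extension’s actual code:

    using System;

    // Hypothetical parser for the CodeLink format described above.
    public sealed class CodeLink
    {
        public string Platform { get; private set; }
        public string ProjectPath { get; private set; }
        public string FilePath { get; private set; }
        public int LineNumber { get; private set; }

        public static CodeLink Parse(string link)
        {
            const string scheme = "codelink://";
            if (link == null || !link.StartsWith(scheme, StringComparison.OrdinalIgnoreCase))
                throw new FormatException("Not a codelink:// link.");

            // Each segment is wrapped in [brackets]; split on "]/[" and trim the outer pair.
            var body = link.Substring(scheme.Length).TrimStart('[').TrimEnd(']');
            var segments = body.Split(new[] { "]/[" }, StringSplitOptions.None);
            if (segments.Length != 4)
                throw new FormatException("Expected platform, project, file, and line number segments.");

            return new CodeLink
            {
                Platform = segments[0],
                ProjectPath = segments[1],
                FilePath = segments[2],
                LineNumber = int.Parse(segments[3])
            };
        }
    }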

    What’s it good for?

    Other than what I’ve suggested, and what you come up with, I’m thinking CodeLink will help you, teams, teachers, and students with:

    • Include CodeLinks in bugs and code reviews to highlight what needs to be reviewed
    • Share CodeLinks in Git repos, pointing to specific code examples, points of interest, and so forth
    • Share CodeLinks with students so they can keep referring to / reviewing useful code

    So what’s next?

    When I was thinking of the link format, I figured I may end up extending this to VS Code and other editors in the future. After all, not everyone uses VS. Why not XCode, Visual Studio Mac, Atom? So, I added a type identifier.

    As always, I look forward to your feedback. Hit me up on Twitter or LinkedIn.

     

    I recently deployed an Azure Cloud Service with Remote Desktop enabled. However, when I went to connect to it on port 3389, the server refused the connection. I remember there was something I had to do, but I had never written down the steps. So, here’s what you need to do, in case you’re looking 🙂

    Note: I use Remote Desktop Connection Manager, a.k.a. RDCMan, also from Microsoft, instead of the standard RDC client. I feel it’s much better, more configurable, and great if you need to work with many remote desktops. I’d love to know why they don’t include it in Windows!

    Step 1: Find your Cloud Service and Slot in Azure Portal

    In Azure, find your Cloud Service. Also select the slot to which you want to connect, such as Production or Staging.

    Step 2: Select the Roles and Instances option

    You’ll see it on the left.

    Step 3: Choose the item to which you want to connect and click Connect

    For example, your web role instance. This will download an .RDP file. You can double-click this file to connect. Ooh! Neat!

    Step 4: If you’re using RDCMan…

    To connect with RDCMan, you’ll need to grab the Cookie: something something something string out of the RDP file. Open it in Notepad++, or your text editor of choice, and grab that value. Ignore the s: text.

    In RDCMan, for the VM, add the string under Connection Settings tab in the Load balance config textbox.

    Step 5: You’re Connected!

    Enjoy!

     

     

    Have you been wondering how to access the Azure Multi-Factor Authentication Settings in the Azure Classic Portal without first having to create an Azure account? I figured this out a few days ago, having an Office 365 tenant, and wanting to use the EMS and Azure Active Directory Premium features. Following Microsoft’s instructions, it said to go to the Azure Classic Portal. The problem is, Office 365 doesn’t include an Azure subscription, it just includes Azure Active Directory, which you manage through the “modern” Azure portal. Unfortunately, the Trusted IPs and MFA capabilities are managed through the Azure Classic Portal, which you can’t directly access without an Azure subscription.

    So, here’s what you do:

    1. Go to portal.office.com
    2. Click Admin to open the admin tools
    3. In the search box, type MFA
    4. Select the Multi-factor authentication search result
    5. Click the Manage multi-factor authentication link
    6. There you go – manage MFA in your Azure AD to your heart’s content!

     

    Want to learn all about Xamarin and how you can use it, while not spending most of your time watching code scroll by in a video? I figured there was room for an explainer without being a closed-captioner for a code tutorial. Enjoy my latest video!

    https://www.youtube.com/edit?video_id=AhvofyQCrhw

    From the description, along with links:

    Have you been considering Xamarin for your cross-platform mobile app? This presentation will help.

    In this non-code-heavy presentation, we’ll discuss:

    * What is Xamarin
    * Development Environment Gotchas
    * Creating a Sample To Do List App without writing any code
    * Reviewing a real Xamarin app that’s “in the wild”
    * Review native, platform-specific integrations
    * Discuss gotchas when using Xamarin, and mobile apps in general
    * Answer audience questions

    Why not code-heavy? Because there are many examples you can follow online. This presentation will provide valuable information you can consider while reviewing the myriad of tutorials available to you with a simple Bing or Google search, or visiting Pluralsight, Microsoft Virtual Academy, or Xamarin University.

    If you have any feedback, please leave in the comments, or ask me on Twitter: @Auri

    Here are the links relevant for this presentation:

    Slides: https://1drv.ms/p/s!AmKBMqPeeM_1-Zd7Y…

    Indy.Code Slides with Cost and Performance Figures: https://1drv.ms/p/s!AmKBMqPeeM_1-JZR4…
    (you can find the Indy.Code() presentation on my YouTube channel)

    Google Xamarin vs. Native iOS with Swift/Objective C vs. Android with Java Performance Article: https://medium.com/@harrycheung/mobil…

    Example code for push notifications, OAuth Twitter/Facebook/Google authentication, and more: https://github.com/codemillmatt/confe…

    Link to Microsoft Dev Essentials for $30/month free Azure credit and free Xamarin training: https://aka.ms/devessentials

    Microsoft Virtual Academy Multi-Threading Series: https://mva.microsoft.com/en-us/train…

     

    I’m continuing my resolution to record as many of my programming and technical presentations as possible. I recently spoke at the inaugural Indy.Code() conference. It was excellent, with an incredible speaker line-up. I hope they, too, post some of their presentations online!

    Watch the Video on YouTube

    From the synopsis:

    Should you write your app “native” or use a “cross-platform” solution like React Native, Xamarin, or NativeScript? The new wave of native-cross-compiling solutions provides significant cost savings, code reuse opportunities, and lower technical debt. Does wholly native, per-platform development still play a role in future mobile development? Let’s discuss together.

    In this presentation, we’ll discuss:

    • The growth of native, hybrid, and cross-platform mobile development solutions
    • Cost analysis of multiple native and cross-platform apps
    • Considerations for each native and cross-platform solution
    • Lessons learned

    Slides are available here: https://t.co/5iLhEoEfen

    If you have any questions, I’m happy to answer them! Please email me or ask on Twitter.