Posts Tagged ‘Xamarin Forms’

A friend reached out today and asked “Hey, I need my splash screen to be themed based on whether Dark Mode is selected on the Android device.” I had never done that before, so I was curious how it should be done.

Dark Mode is new to Android 10, a.k.a. Android Q. Not as tasty, but hey, gotta ship something. It has a range of benefits, such as lower energy consumption, looking cool, and frustrating developers without budget for theme-aware apps.

It turns out, after some sleuthing, it’s relatively straightforward.

First, this article assumes you’re following the “splash screen as a theme” approach, which you can learn more about here. The example is for Xamarin.Forms, but the same approach applies to regular Android development.

Basically, you have a “splashscreen” style, and you set it as your app’s theme in the Android manifest. Then, you “swap” to the real theme in MainActivity. For example, here’s what I use in an app, located in resources/values/styles.xml:

  <!-- Splash screen style -->
  <style name="splashscreen" parent="Theme.AppCompat.DayNight">
    <item name="android:windowBackground">@drawable/splash</item>
    <item name="android:windowNoTitle">true</item>
    <item name="android:windowIsTranslucent">false</item>
    <item name="android:windowIsFloating">false</item>
    <item name="android:backgroundDimEnabled">true</item>
  </style>

Note my drawable. I want a different drawable for my dark vs. light (normal) theme. Here’s what is different:

  • The parent is now Theme.AppCompat.DayNight
  • I’ve added a different set of drawable folders for the Dark theme images. These are the same folder names, with -night appended to the end:

different drawable-night folders

In this example, I haven’t yet added the other folder variations, but you get the point.
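If it helps to visualize, the resource layout ends up looking something like this (only the splash drawable needs a night variant):

  resources/
    drawable/splash.png          <- used for the normal (light) theme
    drawable-night/splash.png    <- used when Dark Mode is enabled
    values/styles.xml            <- the splashscreen style above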

The theme swap code in MainActivity is as follows:

protected override void OnCreate(Bundle savedInstanceState)
{
    TabLayoutResource = Resource.Layout.Tabbar;
    ToolbarResource = Resource.Layout.Toolbar;

    // Swap back to the normal app theme. We used Splash so we didn't have to create a special activity.
    // Cute hack, and better approach.
    // Idea from URL: https://xamarinhelp.com/creating-splash-screen-xamarin-forms/
    Window.RequestFeature(WindowFeatures.ActionBar);
    SetTheme(Resource.Style.MainTheme);

    // The rest of OnCreate is the standard Xamarin.Forms template:
    base.OnCreate(savedInstanceState);
    global::Xamarin.Forms.Forms.Init(this, savedInstanceState);
    LoadApplication(new App());
}

That’s all there is to it. If Dark Mode is enabled, the splash.png from the -night folder will be used; otherwise, the normal image will take its rightful place.

If you have any questions, please hit me up in the comments.

Special thanks to this StackOverflow article for the -night hint.

More info on Android Dark Theme can be found here.

This past week, I had the opportunity to write an app to handle scanning in patients for CoViD testing in our city, Fishers, Indiana. Fishers is one of the top 10 places to raise a family in the United States. Reinforcing that reputation, the city is providing CoViD testing to all 90,000 residents, free of charge. The process is simple:

  1. Register online via an assessment
  2. Receive a patient ID
  3. Scan in at appointment, assign test, and take test
  4. Receive answers within 3 business days

Seems simple, right? Our city isn’t using the painful “touch the back of your eyeball” technique, either. Instead, it’s at the front of the nasal cavity – simple, painless, and you get results. It also enables contact tracing where there are infections, so as a community we can help prevent the spread of disease.

But can we do it?

The problem is, roughly a week ago, Step 3 didn’t exist. The patient ID was from a survey with a QR code. The kit didn’t have any identifier whatsoever – it’s just one of many bagged kits in a box. And this data was being manually entered by other entities running similar tests. To give you an idea of how prone to error this process is, consider the patient ID looks like this:

Sample Survey ID

Any chance for typos much? And that doesn’t solve the kit identifier problem – because there isn’t one.

Box of Tests
Figure: A box of test kits.

Solving the Problem

So, last Saturday, I received an email from our City’s IT Director. He wanted to know if I knew anyone who could marry the patients’ ID with the kit ID for proper tracking. If you know me, this type of project is right up my alley. This project screamed MOBILE APP! I said “I’ll do it!” It would be like a weekend hackathon!

Most cities aren’t offering CoViD testing, and not a single state is offering such a service to all residents. Fishers is different – we’re a “vibrant, entrepreneurial city,” as our awesome forward-thinking mayor, Scott Fadness, often exclaims. His team takes novel approaches to addressing community needs, and this was no different.

Where there is testing, as I understand it, it’s with PCs and handheld scanners. What a nightmare it must be to keep such a setup running – with a laptop, with software that’s manually installed, and patients scanned with a handheld that can’t scan through glass. I’ve worked with those setups before – the technology issues are a huge PITA. Let alone deploying updates in any timely fashion!

We decided at the get-go a mobile app would be the right approach. When asked about barcode scanners, I explained “We don’t need one.” The built-in cameras on modern day cell phones are more than capable of scanning any type of barcode, QR code, and so forth. As an added bonus, any cell phone we use will have Internet connectivity, with Wi-Fi as a backup. The beauty of it is one single device, one single app, everything self-contained and easy to use.

The beauty of it is one single device, one single app, everything self-contained and easy to use.

The Requirements

After our initial discussion, and a bit of back and forth, this was the decided-upon workflow:

  1. Scan the Patient QR Code and Kit ID.
  2. Come up with a Kit Scanning Process. We decided on CODE39 barcodes that would be printed beforehand so technology wouldn’t be an issue each day.
  3. Store the Patient ID and Kit ID for later retrieval. This ended up being “uploaded” to the survey itself, ensuring we didn’t need to store any PII, and didn’t have to build a back-end data store. Small favors…

And this was the mockup:

2020-04-25 Fishers CoViD App
Figure: Whiteboarded app.

Draft Napkin
Figure: Rough brain dump with ideas.

Initially, we talked about generating the Kit barcode on the mobile device, then printing it to a wireless printer in the testing bay. This certainly seemed possible. However, the more I thought about it, the more I realized we could simply pre-print the labels and affix them as needed. This would provide some obvious benefits:

  • We wouldn’t have to come up with a mobile printing solution, which can be tricky, and is not a simple problem to solve cross-platform.
  • We’d keep a printer breakdown out of the picture, ensuring technology didn’t “get in our way”

The key is to get the patients in, tested, and out as efficiently as possible. The simpler we kept the process, the less could go wrong. So, on-demand printing was eliminated, and we’d simply pre-print labels instead. They’d be affixed to the test kit and then assigned to a patient.

Common Needs of Mobile Apps

When determining the app development approach, I took into consideration that every mobile app I’ve built generally has three primary needs that must be addressed:

    1. Where does data come from? Usually this is an API or the user. If an API doesn’t exist, considerable time is necessary to build one.
    2. Where does data go? Also usually an API for some data store. Same API issue.
    3. How does the user interact with data? The app is useless if the user can’t figure it out and, if possible, enjoy the experience. This can have design cost and time impacts.

For Need 1, we knew we had a QR Code. BUT how would we know it’s valid? How would we get the patient data? Well, it just so happened the survey provider had an API. Sweet! We hopped on a call with them and they provided an API key and documentation. That’s how API access should work! They even provided a RegEx to validate the scanned patient ID, which internally to them was actually just a survey ID.

What about the kits? We decided to use a CODE39 barcode font and print on standard Avery labels. We came up with a standard naming and numbering convention, a RegEx to validate, and would pre-print them – a few hundred per day. This would ensure the labels were verifiable after scanning, and that printing wouldn’t be an issue each day. We’d take care of technology problems beforehand – such as printer issues – so they wouldn’t impact patient processing.
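Neither the provider’s patient-ID RegEx nor our exact kit numbering convention is reproduced here, so the patterns below are placeholders. The validation itself was just a couple of anchored regular expressions, something along these lines:

using System.Text.RegularExpressions;

public static class ScanValidator
{
    // Placeholder patterns -- the real patient ID format (and the RegEx the
    // survey provider supplied) and our kit ID convention aren't published.
    private static readonly Regex PatientIdPattern = new Regex(@"^R_[A-Za-z0-9]{15}$");
    private static readonly Regex KitIdPattern = new Regex(@"^FSH-\d{5}$");

    public static bool IsValidPatientId(string scannedText) =>
        !string.IsNullOrWhiteSpace(scannedText) && PatientIdPattern.IsMatch(scannedText.Trim());

    public static bool IsValidKitId(string scannedText) =>
        !string.IsNullOrWhiteSpace(scannedText) && KitIdPattern.IsMatch(scannedText.Trim());
}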

Barcodes on Avery Labels
Figure: A combination of Excel to generate the labels following the naming convention, plus a mail merge to place them onto the off-the-shelf labels.

OK, now for Need 2… We can get the data, but we have to store it somewhere. Initially, we thought about building a separate back-end. The survey provider, Qualtrics, explained we could send the data back to their system and store it with the initial survey. Well, that was much better! No new storage API development was needed, as they already had the infrastructure in place. Building a new, solid, secure API in a short period of time would have been no small task.

For Need 3, the user experience, I borrowed my grandfather’s phrase: It must require a PH, D. Push Here, Dummy. I wanted “three taps and a send,” as follows:

  1. Scan the Patient – Once scanned, look the user up and verify they exist, showing confirmation details on the screen.
  2. Scan the Kit – Ensure the barcode matches the expected format.
  3. Confirm & Submit – Prompt to ensure patient details, such as name and postal code, have been verified, then confirm the entry has been saved.

It must require a PH, D. Push Here, Dummy.

That’s it – no chance for typos, and verification at every step, helping things go right. Little animations would show when a step had been completed, and scans could be done in any order.

Picked up 5600 Labels
Figure: Texting back and forth, getting our bases covered.

Xamarin As The Dev Stack

We’re building a mobile app here, and we may need it for multiple platforms. iOS – both iPhone and iPad, Android, and perhaps even Windows. Building each platform separately would take a lot of time to complete – time we didn’t have. It was April 25, we needed to be testing by April 27, and we were going live May 1.

The right choice was Xamarin with Xamarin.Forms – Microsoft’s cross-platform mobile framework. It’s similar to React Native, but you have full access to the underlying platform APIs. That’s because you’re building a real native app, not an interpreted overlay. With a single solution, we could build the iOS, Android, and UWP (Windows) apps, with 90% or more code sharing. I’m a Xamarin certified mobile developer, so this was going to be fun!

Solution Explorer
Figure: The Xamarin app in Visual Studio.

First Draft

Within a few hours, I had an alpha version of the app running. It was rough, and didn’t have the best UI, but it was scanning and talking with the Qualtrics API. Hey, once the base stuff’s working, you can make it look pretty!

The app consisted of a few core components (a rough sketch of their contracts follows the list):

  • App Service – Managing any processes the app needed completed, such as retrieving patient survey details, updating a patient survey record, verifying scanned code formatting, and so forth.
  • API Service – Talking back and forth with the Qualtrics API.
  • Analytics Service – Tracking aspects of the application, such as kit scan successes and failures, any exceptions that may occur, and so forth, so we can improve the app over time.
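The post doesn’t include the actual service contracts, so the interfaces below are a minimal sketch with guessed names and method shapes, just to show how the responsibilities were split:

using System.Threading.Tasks;

// Hypothetical contracts -- illustrative only, not the production interfaces.
public interface IApiService
{
    Task<PatientRecord> GetPatientAsync(string patientId);      // look up the survey record
    Task<bool> AttachKitAsync(string patientId, string kitId);  // write the kit ID back to the survey
}

public interface IAppService
{
    bool IsValidPatientId(string scannedText);
    bool IsValidKitId(string scannedText);
    Task<PatientRecord> LookUpPatientAsync(string patientId);
    Task<bool> SubmitAsync(string patientId, string kitId);
}

public interface IAnalyticsService
{
    void TrackEvent(string name);
    void TrackException(System.Exception ex);
}

// Minimal shape for the data handed back to the UI (also a guess).
public class PatientRecord
{
    public string PatientId { get; set; }
    public string Name { get; set; }
    public string PostalCode { get; set; }
    public string AppointmentTime { get; set; }
}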

Build 1
Figure: Build 1 of the app. I first tested with my Android device, then rolled out to iOS, testing on both an iPhone and iPad.

I also had to ensure scanning went off without a hitch. After all, that’s what this app is doing – getting all the data quickly, then tying it together. I configured the scanning solution to only scan QR codes when scanning the patient ID, and only CODE39 barcodes when scanning kits. That way, if the codes were next to each other, the tech wouldn’t scan the wrong item and cause confusion. Remember, the technicians are medical techs, not computer techs – any technology problem could stop the patient processing flow. We needed to ensure the technology didn’t get in the way. You do that by testing thoroughly, and keeping the end user in mind.
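The post doesn’t name the scanning library. Assuming something like ZXing.Net.Mobile, a common barcode plugin for Xamarin.Forms, restricting each scan to a single symbology looks roughly like this:

using System.Collections.Generic;
using System.Threading.Tasks;
using ZXing;
using ZXing.Mobile;

public class ScanService
{
    // Platform initialization (e.g. MobileBarcodeScanner.Initialize on Android) omitted.
    private readonly MobileBarcodeScanner _scanner = new MobileBarcodeScanner();

    // Only accept QR codes when scanning the patient ID.
    public async Task<string> ScanPatientAsync()
    {
        var options = new MobileBarcodeScanningOptions
        {
            PossibleFormats = new List<BarcodeFormat> { BarcodeFormat.QR_CODE }
        };
        var result = await _scanner.Scan(options);
        return result?.Text;
    }

    // Only accept CODE39 when scanning the kit label.
    public async Task<string> ScanKitAsync()
    {
        var options = new MobileBarcodeScanningOptions
        {
            PossibleFormats = new List<BarcodeFormat> { BarcodeFormat.CODE_39 }
        };
        var result = await _scanner.Scan(options);
        return result?.Text;
    }
}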

Testing Scanning
Figure: QR code and CODE39 barcodes for testing.

Final Approach and User Experience

Once the UI was working, I added final touches to the UX to make it friendly and easy to use (a sketch of the checkmark animation follows the list):

  1. When a technician successfully scanned a patient, the information would appear and a green checkmark would animate in. This would clearly indicate that step was completed. If there was an issue with the verification, they would be prompted to scan again. Optionally, they could manually enter the patient ID, which would follow the same validation steps.
  2. When a kit was scanned, another green checkmark would animate in, signifying that step, too, was complete.
  3. Once both steps had been completed, the technician would clearly understand the two greens meant “good to go” and could submit the patient data. They would be prompted to confirm they had verified all patient data and everything on the screen was correct.
  4. Once patient data was successfully transmitted, a confirmation dialog would appear. Upon dismissal, the UI would animate to the reset state, making it clear it’s OK to proceed to the next patient.
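The animation code isn’t shown in the post; in Xamarin.Forms, a “checkmark pops in” effect only needs the built-in view animation extensions. A minimal sketch, assuming an Image element per checkmark that starts hidden:

using System.Threading.Tasks;
using Xamarin.Forms;

public static class CheckmarkAnimations
{
    // Pop the green checkmark in once a scan step validates successfully.
    // Assumes the Image starts with Opacity="0" and Scale="0.5" in XAML.
    public static Task ShowStepCompleteAsync(Image checkmark) =>
        Task.WhenAll(
            checkmark.FadeTo(1, 250),
            checkmark.ScaleTo(1.0, 250, Easing.SpringOut));

    // Fade it back out when the UI resets for the next patient.
    public static Task ResetStepAsync(Image checkmark) =>
        Task.WhenAll(
            checkmark.FadeTo(0, 150),
            checkmark.ScaleTo(0.5, 150, Easing.CubicIn));
}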

Devices, TestFlight, and Apple, Oh My!

So the app was in a good state. How were we going to get it on devices? This isn’t an app that we want in the App Store. It’s not a general consumer app – at least, not yet. TestFlight to the rescue! We’d push it to Apple’s TestFlight app testing service, then enroll all the iOS devices. That would ensure that, as we tweaked the app, we could quickly push the updates without any messy manual installs.

For those that have deployed iOS apps before, you know this isn’t a fast process. The first version of any app into TestFlight must be reviewed by Apple. I uploaded the first version and waited…

Roughly a day later, Apple rejected the app. BUT WHY? Well, we hadn’t provided any sample QR codes or bar codes to scan, so they rejected it. UGH! Really?? I didn’t even know that was a testing requirement! You learn something new every day… So I sent a URL with some examples to test with, as you can’t upload files to the testing site, and waited. Hours later, thankfully, Apple approved the app for testing!

App Store Test Rejection
Figure: Apple beta review rejection email.

We enrolled the various iPhones and iPads in TestFlight and we were able to start testing. Other than a restriction with SSL over the City’s network, which was quickly resolved, we had our devices ready to go. Not bad for under 48 hours!! 

Note that, once an app is in TestFlight, additional builds go through almost instantly. This ensured we could tweak as needed and not wait 24+ hours to validate each time.

TestFlight Versions
Figure: We could release updates with velocity after the initial approval.

Rolling It Out – Dress Rehearsal

We wanted to make sure the app worked without a hitch. A day before release, we had a “dress rehearsal.” Everyone would be ready for the testing, and we’d introduce the app. It’s a small part, but it ties it all together. Tracy, the I.T. Director, and I had been testing in earnest prior to this time, and we were feeling pretty good about it.

That morning, I walked the users through the app, joking about the PH, D requirement. Prior to my arrival, they had been testing the process on one of our citizens, who must have been quite tired from all the work:

Test Patient

The pressing questions were:

  • Can we scan a QR code through glass, so a citizen doesn’t have to roll down their window? Yes, unless it’s super tinted, which would be illegal anyway.
  • What if we can’t scan the code? This wasn’t an issue, except for a QR code variant issue discussed later, and manual entry was supported just in case.
  • What if Internet access goes down? We had cellular backup on all devices.
  • How will we apply the barcode to the kit? Peel and stick, then scan. We scan after removal from the main sheet so we don’t scan the wrong code. In a later version we added a prompt when the scanned patient had already been through the process.
  • What if the QR code is used more than once? This wasn’t an issue, as the name and appointment time wouldn’t match.

Here are a few photos from that morning – that was a lot of fun!

Figure: Photos from that morning (check-in, a box of test kits, P100 masks, presenting the app, the road to the test, a completed test, and a test vial).

    Day 1!

    Day 1 was here, and real citizens were about to get tested. I slept well the night before, knowing we had tested thoroughly. We only had one hiccup: The QR code in the email was different than the QR code on the confirmation website. This was causing validation errors, as the website QR code’s patient ID couldn’t be found in the system. Not an app issue, but that doesn’t matter.

    Couldn't find Patient ID
    Figure: Ruh-roh! The QR codes weren’t matching between different sources. Yellow alert!

    The survey provider quickly addressed the issue and we were good to go. It wasn’t a big deal – they provided a website to manually enter the patient ID for scanning with a properly generated QR code, and it barely impacted patients. Day 1 was a rousing success!

    2020-04-30 In the Field
    Figure: The app in use!

    Wrapping Up

    Going from no-app to app being used with patients in less than one week was an incredible experience. It feels great to help the community during this period of uncertainty. I’m grateful our city wanted to make the process as seamless as possible, using technology to help things go right, and providing me the opportunity to assist. I’m thankful that, once again, Xamarin was a great solution.

    I’ll probably have a technology walk-through in the near future – I didn’t want to concentrate on the underpinnings of the application for this article. I’ll leave that to a discussion for the Indy Xamarin Meetup.

    Final Version
    Figure: The final app, with my info scanned in.

    I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of them on his iPad, photos of his pet lizard, and informed me of many lizard facts. He’s also a talented sketch artist – showcasing many drawings of Pokemon, lizards and more. Oh, yeah, he’s also into computers and loves his iPad.

    Part of the mentoring program is to help with school, being there as they adjust to growing up, and both respecting and encouraging their interests.

    It just so happens that he had a science project coming up. He wasn’t sure what to write about. His pet lizard recently had an attitude shift, and he figured it was because it wasn’t getting as much food week over week. After he changed that, he noticed its attitude improved. So, he wanted to cover that somehow.

    Seeing his interest in lizards, drawing, and computers I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show them drawings he did. He loved the idea.

    Planning

    We only get to meet for 30 minutes each week. So, I gave him some homework. Next time we meet, “show me what the app would look like.” He gleefully agreed.

    One week later, he proudly showed me his vision for the app:

    Reptile Tracker

    I said “Very cool.” I’m now convinced “he’s in” on the project, and taking it seriously.

    I was also surprised to learn that my expectations of “show me what it would look like” were different from what I received from someone both much younger than I and with a different world view. To him, software may simply be visualized as an icon. In my world, it’s mockups and napkin sketches. It definitely made me think about others’ perceptions!

    True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

    1. Identify if there are reptiles in the photo.
    2. Tell them if it’s safe to pick it up, if it’s venomous, and so forth.
    3. Get one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

    Alright, time for the next assignment. My homework was to figure out how to do it. His homework was to draw up the Lizard, Snake, and Turtle that will be shown in the app.

    Challenge accepted!

    I quickly determined a couple key design and development points:

    • The icon he drew is great, but looks like a drawing on the screen. I think I’ll need to ask him to draw them on my Surface Book, so they have the right look. Looks like an opportunity for him to try Fresh Paint on my Surface Book.
    • Azure Cognitive Services, specifically their Computer Vision solution (API), will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it a bit for this article, as the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

    Writing the Code

    The weekend came, and I finally had time. I had been thinking about the app the remainder of the week. I woke up early Saturday and drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…

    20181105_083756

    I broke out my shiny new MacBook Pro and spun up Visual Studio for Mac. Xamarin.Forms was the perfect candidate for this project – cross platform, baby! I started a new Tabbed Page project, brought over some code for taking photos with the Xam.Plugin.Media plugin and resizing them, and the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it’s only the first week 🙂

    Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!

    Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
     
    namespace ReptileTracker.Services
    {
         public interface IImageRecognitionService
         {
             string ApiKey { get; set; }
             Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
         }
    }
    

    Now it was time to check out Mike’s article. It made sense, and was close to what I wanted. However, the packages he referenced were for Microsoft’s Project Oxford. By 2018, those capabilities had been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

    A few developer notes for those playing with Azure Cognitive Services:

    • Hold on to that API key, you’ll need it
    • Pay close attention to the Endpoint on the Overview page – you must provide it, otherwise you’ll get a 403 Forbidden

    Figure: The Endpoint value on the Azure Cognitive Services Overview page.

    And here’s the implementation. Note the implementation must have a parameter-less constructor, otherwise Dependency Service won’t resolve it.

    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Threading.Tasks;
    using ReptileTracker.Services;
    using Xamarin.Forms;
     
    [assembly: Dependency(typeof(ImageRecognitionService))]
    namespace ReptileTracker.Services
    {
        public class ImageRecognitionService : IImageRecognitionService
        {
            /// <summary>
            /// The Azure Cognitive Services Computer Vision API key.
            /// </summary>
            public string ApiKey { get; set; }
     
            /// <summary>
            /// Parameterless constructor so Dependency Service can create an instance.
            /// </summary>
            public ImageRecognitionService()
            {
     
            }
     
            /// <summary>
            /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
            /// </summary>
            /// <param name="apiKey">API key.</param>
            public ImageRecognitionService(string apiKey)
            {
     
                ApiKey = apiKey;
            }
     
            /// <summary>
            /// Analyzes the image.
            /// </summary>
            /// <returns>The image.</returns>
            /// <param name="imageStream">Image stream.</param>
            public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
            {
                const string funcName = nameof(AnalyzeImage);
     
                if (string.IsNullOrWhiteSpace(ApiKey))
                {
                    throw new ArgumentException("API Key must be provided.");
                }
     
                var features = new List<VisualFeatureTypes> {
                    VisualFeatureTypes.Categories,
                    VisualFeatureTypes.Description,
                    VisualFeatureTypes.Faces,
                    VisualFeatureTypes.ImageType,
                    VisualFeatureTypes.Tags
                };
     
                var credentials = new ApiKeyServiceClientCredentials(ApiKey);
                var handler = new System.Net.Http.DelegatingHandler[] { };
                using (var visionClient = new ComputerVisionClient(credentials, handler))
                {
                    try
                    {
                        imageStream.Position = 0;
                        visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                        var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                        return result;
                    }
                    catch (Exception ex)
                    {
                        Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                        return null;
                    }
                }
            }
     
        }
    }

    And here’s how I referenced it from my content page:

    pleaseWait.IsVisible = true;
    pleaseWait.IsRunning = true;
    var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
    imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
    var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
    pleaseWait.IsRunning = false;
    pleaseWait.IsVisible = false;
    
    var tagsReturned = details?.Tags != null 
                       && details?.Description?.Captions != null 
                       && details.Tags.Any() 
                       && details.Description.Captions.Any();
    
    lblTags.IsVisible = true; 
    lblDescription.IsVisible = true; 
    
    // Determine if reptiles were found. 
    var reptilesToDetect = AppResources.DetectionTags.Split(','); 
    var reptilesFound = details.Tags.Any(t => reptilesToDetect.Contains(t.Name.ToLower()));  
    
    // Show animations and graphics to make things look cool, even though we already have plenty of info. 
    await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
    await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
    await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
    await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

    That worked like a champ, with a few gotchas:

    • I would receive a 400 Bad Request if I sent an image that was too large. 1024 x 768 worked, but 2000 x 2000 didn’t. The documentation says the image must be less than 4MB, and at least 50×50. (A capture-time downsizing sketch follows this list.)
    • That API endpoint must be initialized. Examples don’t always make this clear. There’s no constructor that takes an endpoint address, so it’s easy to miss.
    • It can take a moment for recognition to occur. Make sure you’re using async/await so you don’t block the UI Thread!
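    One way to stay under those size limits is to downsize at capture time. Since I was already using the Xam.Plugin.Media plugin for photos, a capture call that keeps the payload small could look roughly like this (the option values are illustrative, not necessarily what the app uses):

    using System.IO;
    using System.Threading.Tasks;
    using Plugin.Media;
    using Plugin.Media.Abstractions;

    public class PhotoCaptureService
    {
        // Capture a photo already scaled down so the Computer Vision
        // size limits aren't hit. Option values are illustrative.
        public async Task<Stream> TakeReptilePhotoAsync()
        {
            if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
                return null;

            var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
            {
                PhotoSize = PhotoSize.Medium,   // scale the captured image down
                CompressionQuality = 80         // JPEG quality; smaller uploads are faster
            });

            return photo?.GetStream();
        }
    }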

    Prettying It Up

    Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.

    Mixed Results

    So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned (a small caption-reading sketch follows these examples). Some examples:

    • My barista, Matt, was “a smiling woman working in a store”
    • My mom was “a smiling man” – she was not amused
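    For reference, the caption is just a couple of properties on the ImageAnalysis result; here’s a minimal sketch of pulling out the top one:

    using System.Linq;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

    public static class CaptionHelper
    {
        // Pull the highest-confidence caption (the "scene description") out of the result.
        public static string DescribeScene(ImageAnalysis details)
        {
            var caption = details?.Description?.Captions?.OrderByDescending(c => c.Confidence).FirstOrDefault();

            return caption == null
                ? "No description returned"
                : $"{caption.Text} ({caption.Confidence:P0} confidence)";
        }
    }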

    Most of the time, as long as the subjects were clear, the scene recognition was correct:

    Screenshot_20181105-080807

    Or close to correct, in this shot with a turtle at Petsmart:

    tmp_1541385064684

    Sometimes, though, nothing useful would be returned:

    Screenshot_20181105-080727

    I would have thought it would have found “White Castle”. I wonder if it avoids returning brand names for some reason? They do have an OCR endpoint, so maybe that would be useful in another use case.

    Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

    Screenshot_20181105-081207

    I’ll need to read more about how to improve accuracy, and whether that’s even an option.

    Good thing I implemented it with an interface! I could try Google’s computer vision services next.

    Next Steps

    We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

    Some things I’d like to try:

    • Highlight the tags in the image, by drawing over the image. I’d make this a toggle.
    • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

    Please let me know if you have any questions by leaving a comment!

    Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

    Update 2018-11-06:

    • The tags are in two different locations – Tags and Description.Tags. Two different sets of tags are in there, so I’m now combining those lists and getting better results (see the sketch after this list).
    • I found I could get color details. I’ve updated the accent color surrounding the photo. Just a nice design touch.
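    Here’s a rough sketch of both tweaks. The merging logic is mine, not copied from the app, and note that the accent color only comes back if VisualFeatureTypes.Color is included in the requested features:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
    using Xamarin.Forms;

    public static class AnalysisHelpers
    {
        // Merge the two tag collections (Tags and Description.Tags) into one
        // case-insensitive set before matching against the detection list.
        public static HashSet<string> CombineTags(ImageAnalysis details)
        {
            var combined = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

            foreach (var tag in details?.Tags ?? new List<ImageTag>())
                combined.Add(tag.Name);

            foreach (var tag in details?.Description?.Tags ?? new List<string>())
                combined.Add(tag);

            return combined;
        }

        // Use the returned accent color as a border around the photo.
        // AccentColor comes back as a hex string without the leading '#'.
        public static void ApplyAccentColor(ImageAnalysis details, Frame photoFrame)
        {
            var accent = details?.Color?.AccentColor;
            if (!string.IsNullOrWhiteSpace(accent))
                photoFrame.BorderColor = Color.FromHex($"#{accent}");
        }
    }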

    I ran into this issue this week. I would define the Source as a URL and then, nothing…

    It turns out, with FFImageLoading, an indispensable Xamarin.Forms plugin available via NuGet, you must also set the ErrorPlaceholder property if loading your image from a URL. That did the trick – images started loading perfectly!

    I’ve reported what I think is a bug. I haven’t yet looked at their code.

    Here’s an example of how I fixed it:

    Working Code:

    <ff:CachedImage 
        Source="{Binding ModelImageUrl}"
        ErrorPlaceholder="icon_errorloadingimage"
        DownsampleToViewSize="True"
        RetryCount="3"
        RetryDelay="1000"
        WidthRequest="320"
        HeightRequest="240"
        Aspect="AspectFit"
        HorizontalOptions="Center" 
        VerticalOptions="Center" />
    

    Non-Working Code, note the missing ErrorPlaceholder property:

    <ff:CachedImage 
        Source="{Binding ModelImageUrl}"
        DownsampleToViewSize="True"
        RetryCount="3"
        RetryDelay="1000"
        WidthRequest="320"
        HeightRequest="240"
        Aspect="AspectFit"
        HorizontalOptions="Center" 
        VerticalOptions="Center" />
    

    I hope that helps others with the same issue. Enjoy!

    I had the need today to display strikethrough text in a Xamarin Forms app. The built-in label control didn’t support such formatting. So, leaning on Unicode’s combining long stroke overlay character, I wrote a function to convert any string to a strikethrough string. To be fair, this works great for the normal character set, so I feel it’s good for most things. Please let me know if your mileage varies.

    Business case: I needed to show a “Was some dollar amount” value. Like “Was $BLAH, and Now BLAH!”

    In my class, I simply called into my strikethrough converter, as follows:

    The property:

    public string StrikeThroughValueText => StrikeThroughValue.HasValue ? $"{ConvertToStrikethrough(StrikeThroughValue.Value.ToString("C"))}" : "???";
    

    The function:

    // Appends U+0336 (combining long stroke overlay) after each character,
    // which renders the entire string as struck through.
    private string ConvertToStrikethrough(string stringToChange)
    {
        var newString = "";
        foreach (var character in stringToChange)
        {
            newString += $"{character}\u0336";
        }

        return newString;
    }
    

    Enjoy! I hope this helps you 🙂

    Link: More about why this works: Combining Long Stroke Overlay.

    I ran into this issue today when debugging on Android, so posting what took an hour to figure out 🙂 This is for when you’re getting a null reference exception when attempting to scan. I was following the instructions here, and then, well, it wouldn’t work 🙂

    Rather than using the Dependency Resolver, you’ll need to pass in the Application Context from Android. So, in the App, create a static reference to the IQrCodeScanningService, as follows:

    public partial class App : Application
    {
        public static IQrCodeScanningService QrCodeScanningService;
    

    Then, populate that static instance from the Android app, as follows:

    App.QrCodeScanningService = new QrCodeScanningService(this);
    global::Xamarin.Forms.Forms.Init(this, bundle);
    LoadApplication(new App());
    

    Obviously you’ll also need a matching constructor, like so:

    public class QrCodeScanningService : IQrCodeScanningService
    {
        private readonly Context _context;
     
        public QrCodeScanningService(Context context)
        {
            _context = context;
        }
    

    This solved the problem like magic for me. I hope it helps you, too!

    P.S. Make sure you have the CAMERA permission. I’ve also read you may need the FLASHLIGHT permission, although I’m not entirely sure that’s required.

    So I had to deal with this recently. There were many examples out there, many of which didn’t work. Sooo, I’m blogging my code example so others don’t remain stuck 🙂

    In short:

    1. In the XAML, add a CommandParameter binding, and wire up the Clicked event handler.
    2. In the C# Event Handler: Read the (sender as Button).CommandParameter and it’ll be the bound object. Cast / parse accordingly.

    XAML (condensed):

    <ListView x:Name="LocationsListView"
              ItemsSource="{Binding Items}"
              VerticalOptions="FillAndExpand"
              HasUnevenRows="true"
              RefreshCommand="{Binding LoadLocationsCommand}"
              IsPullToRefreshEnabled="true"
              IsRefreshing="{Binding IsBusy, Mode=OneWay}"
              Refreshing="LocationsListView_OnRefreshing"
              CachingStrategy="RecycleElement">
        <ListView.ItemTemplate>
            <DataTemplate>
                <ViewCell>
                    <StackLayout Orientation="Horizontal" Padding="5">
                        <StackLayout WidthRequest="64">
                            <Button CommandParameter="{Binding Id}"
                                    BackgroundColor="#4CAF50"
                                    Clicked="MapButtonClicked"
                                    Text="Map"
                                    HorizontalOptions="FillAndExpand" />
                        </StackLayout>
                    </StackLayout>
                </ViewCell>
            </DataTemplate>
        </ListView.ItemTemplate>
    </ListView>

    C#:

    protected void MapButtonClicked(object sender, EventArgs e)
    {
        // CommandParameter is bound to the item's Id, so look the item up by it.
        var selectedLocation = _viewModel.Items.First(item =>
            item.Id == int.Parse((sender as Button).CommandParameter.ToString()));

        Utility.LaunchMapApp(selectedLocation.Latitude, selectedLocation.Longitude);
    }

    I’m pretty proud of this. Working on the app with the City of Fishers’ support, we’ve brought home a Mira Honorable Mention. After less than a year, we have thousands of users and multiple arrests, with hundreds of incidents reported by Fishers residents. Pretty cool. Our team deserves it for all their hard work! Special thanks to Ed Gebhart, Mayor Scott Fadness, Chiefs Mitch Thompson and George Kehl, and the officers and citizens who continue to provide feedback to make this service even better for our community. 🙂

    IBJ Article: https://www.techpoint.org/2017/04/mira-awards-winners-2017/

    Mira Award Plaques

    A little technical detail on the app, for those who are interested:

    Platform: Xamarin with Xamarin.Forms, so we only had to write it once to deploy to iOS and Android. Yes, it really works.

    Development Window: 18 months. Includes test runs with officers and the community.

    Language: C#.

    Time to Deploy to Google Play Store: Less than 15 minutes.

    Time to Approve our Apple Developer Account: 3 months. They wouldn’t believe we were the City. Even with a phone call from the Mayor. That was an experience!

    Time to Approve App, once we were in: 3 days. They were pretty cool after we were approved. 🙂

     

    Want to learn all about Xamarin and how you can use it, while not spending most of your time watching code scroll by in a video? I figured there was room for an explainer without being a close-captioner for a code tutorial. Enjoy my latest video!

    https://www.youtube.com/watch?v=AhvofyQCrhw

    From the description, along with links:

    Have you been considering Xamarin for your cross-platform mobile app? This presentation will help.

    In this non-code-heavy presentation, we’ll discuss:

    * What is Xamarin
    * Development Environment Gotchas
    * Creating a Sample To Do List App without writing any code
    * Reviewing a real Xamarin app that’s “in the wild”
    * Review native, platform-specific integrations
    * Discuss gotchas when using Xamarin, and mobile apps in general
    * Answer audience questions

    Why not code-heavy? Because there are many examples you can follow online. This presentation will provide valuable information you can consider while reviewing the myriad of tutorials available to you with a simple Bing or Google search, or visiting Pluralsight, Microsoft Virtual Academy, or Xamarin University.

    If you have any feedback, please leave in the comments, or ask me on Twitter: @Auri

    Here are the links relevant for this presentation:

    Slides: https://1drv.ms/p/s!AmKBMqPeeM_1-Zd7Y…

    Indy.Code Slides with Cost and Performance Figures: https://1drv.ms/p/s!AmKBMqPeeM_1-JZR4…
    (you can find the Indy.Code() presentation on my YouTube channel)

    Google Xamarin vs. Native iOS with Swift/Objective C vs. Android with Java Performance Article: https://medium.com/@harrycheung/mobil…

    Example code for push notifications, OAuth Twitter/Facebook/Google authentication, and more: https://github.com/codemillmatt/confe…

    Link to Microsoft Dev Essentials for $30/month free Azure credit and free Xamarin training: https://aka.ms/devessentials

    Microsoft Virtual Academy Multi-Threading Series: https://mva.microsoft.com/en-us/train…

     

    I’m continuing my resolution to record as many of my programming and technical presentations as possible. I recently spoke at the inaugural Indy.Code() conference. It was excellent, with an incredible speaker line-up. I hope they, too, post some of their presentations online!

    Watch the Video on YouTube

    From the synopsis:

    Should you write your app “native” or use a “cross-platform” solution like React Native, Xamarin, or NativeScript? The new wave of native-cross-compiling solutions provide significant cost savings, code reuse opportunities, and lower technical debt. Does wholly native, per platform development, still play a role in future mobile development? Let’s discuss together.

    In this presentation, we’ll discuss:

    • The growth of native, hybrid, and cross-platform mobile development solutions
    • Cost analysis of multiple native and cross-platform apps
    • Considerations for each native and cross-platform solution
    • Lessons learned

    Slides are available here: https://t.co/5iLhEoEfen

    If you have any questions, I’m happy to answer them! Please email me or ask on Twitter.