Archive for the ‘Android’ Category

So I recently ran into an issue where Microsoft AppCenter wouldn’t build my Android APK… It would find the source, build successfully, then fail to find the resulting APK.

If you’re running into this issue, try the following:

This is based on @johnclarete’s idea, which I had to modify because it relied on Flutter’s configuration.

  1. Modify the build.gradle file in your Application folder as follows:

Find the android { part of the file and paste in the buildTypes part:

android {
    compileSdkVersion 30

    buildTypes {
        appcenter {
            // Redirect the APK output to the path where AppCenter expects to find it.
            applicationVariants.all { variant ->
                variant.outputs.all {
                    def currentFile = new File(outputFileName)
                    def filename = currentFile.getName()
                    outputFileName = "../../../../../Application/Application/build/outputs/apk/${filename}"
                }
            }
        }
    }
}
  2. In AppCenter, you should now have an “appcenter” Build Variant option.
  3. Disable Build Android App Bundle.

That worked for me. I hope this helps others.

A friend reached out today and asked “Hey, I need my splash screen to be themed based on whether Dark Mode is selected on the Android device.” I had never done that before, so I was curious how it should be done.

Dark Mode is new to Android 10, a.k.a. Android Q. Not as tasty, but hey, gotta ship something. It has a range of benefits, such as lower energy consumption, looking cool, and frustrating developers without budget for theme-aware apps.

It turns out, after some sleuthing, it’s relatively straightforward.

First, this article assumes you’re following the “splash screen as a theme” approach, which you can learn more about here. The example is for Xamarin.Forms, but the same approach applies to regular Android development.

Basically, you have a “splashscreen” style, and you set it as your app’s theme in the Android manifest. Then, you “swap” to the real theme in MainActivity. For example, here’s what I use in an app, located in Resources/values/styles.xml:

  <!-- Splash screen style -->
  <style name="splashscreen" parent="Theme.AppCompat.DayNight">
    <item name="android:windowBackground">@drawable/splash</item>
    <item name="android:windowNoTitle">true</item>
    <item name="android:windowIsTranslucent">false</item>
    <item name="android:windowIsFloating">false</item>
    <item name="android:backgroundDimEnabled">true</item>
  </style>

Note my windowBackground drawable: I want a different drawable for my dark vs. light (normal) theme. Here’s what’s different:

  • The parent is now Theme.AppCompat.DayNight
  • I’ve added a different set of drawable folders for the Dark theme images. These are the same folder names, with -night appended to the end:

different drawable-night folders

In this example, I haven’t yet added the other folder variations, but you get the point.
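
One note on wiring the splash theme up: in a Xamarin.Android project, the theme entry in the manifest is usually generated from the Activity attribute rather than edited by hand. Here’s a minimal sketch of that, assuming a standard Xamarin.Forms MainActivity (the namespace and app label are illustrative):

using Android.App;
using Android.Content.PM;

namespace MyApp.Droid // namespace and names are illustrative
{
    // The Theme value ends up in the generated AndroidManifest, so the
    // splashscreen style (and its windowBackground drawable) shows at launch.
    [Activity(Label = "MyApp",
              Theme = "@style/splashscreen",
              MainLauncher = true,
              ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)]
    public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
    {
        // OnCreate swaps back to the normal theme, as shown in the next snippet.
    }
}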

The theme swap code in MainActivity is as follows:

protected override void OnCreate(Bundle savedInstanceState)
{
    TabLayoutResource = Resource.Layout.Tabbar;
    ToolbarResource = Resource.Layout.Toolbar;

    // Swap back to the normal app theme. We used Splash so we didn't have to create a special activity.
    // Cute hack, and better approach.
    // Idea from URL: https://xamarinhelp.com/creating-splash-screen-xamarin-forms/
    Window.RequestFeature(WindowFeatures.ActionBar);
    SetTheme(Resource.Style.MainTheme);

That’s all there is to it. If Dark mode is enabled, the splash.png from the -night folder will be used; otherwise the normal image will take its rightful place.
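
And if you ever need to branch in code as well, not just via resources, you can check the current UI mode flags. Here’s a minimal sketch, assuming Xamarin.Android; the helper class is mine, purely for illustration:

using Android.Content;
using Android.Content.Res;

public static class ThemeHelper // illustrative helper, not part of the app above
{
    // Returns true when the device is currently in Dark (night) mode.
    public static bool IsDarkMode(Context context)
    {
        var nightFlags = context.Resources.Configuration.UiMode & UiMode.NightMask;
        return nightFlags == UiMode.NightYes;
    }
}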

If you have any questions, please hit me up in the comments.

Special thanks to this StackOverflow article for the -night hint.

More info on Android Dark Theme can be found here.

I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of them on his iPad, photos of his pet lizard, and informed me of many lizard facts. He’s also a talented sketch artist – showcasing many drawings of Pokemon, lizards and more. Oh, yeah, he’s also into computers and loves his iPad.

Part of the mentoring program is to help with school, being there as they adjust to growing up, and both respecting and encouraging their interests.

It just so happens that he had a science project coming up. He wasn’t sure what to write about. His pet lizard recently had an attitude shift, and he figured it was because it wasn’t getting as much food week over week. Once he changed that, he noticed its attitude changed. So, he wanted to cover that somehow.

Seeing his interest in lizards, drawing, and computers I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show them drawings he did. He loved the idea.

Planning

We only get to meet for 30 minutes each week. So, I gave him some homework. Next time we meet, “show me what the app would look like.” He gleefully agreed.

One week later, he proudly showed me his vision for the app:

Reptile Tracker

I said “Very cool.” I’m now convinced “he’s in” on the project, and taking it seriously.

I was also surprised to learn that my expectations of “show me what it would look like” were different from what I received from someone both much younger than I and with a different world view. To him, software may simply be visualized as an icon. In my world, it’s mockups and napkin sketches. It definitely made me think about others’ perceptions!

True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

  1. Identify if there are reptiles in the photo.
  2. Tell them if it’s safe to pick it up, if it’s venomous, and so forth.
  3. Get one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

Alright, time for the next assignment. My homework was to figure out how to do it. His homework was to draw up the Lizard, Snake, and Turtle that will be shown in the app.

Challenge accepted!

I quickly determined a couple key design and development points:

  • The icon he drew is great, but looks like a drawing on the screen. I think I’ll need to ask him to draw them on my Surface Book, so they have the right look. Looks like an opportunity for him to try Fresh Paint on my Surface Book.
  • Azure Cognitive Services, specifically their Computer Vision solution (API), will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it a bit for this article, as the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

Writing the Code

The weekend came, and I finally had time. I had been thinking about the app the remainder of the week. I woke up early Saturday and drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…


I broke out my shiny new MacBook Pro and spun up Visual Studio for Mac. Xamarin Forms was the perfect candidate for this project – cross platform, baby! I started a new Tabbed Page project, brought over some code for taking photos with the Xam.Plugin.Media plugin and resizing them, and added the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it’s only the first week 🙂

Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!
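
For reference, the photo-taking code I mentioned is essentially the standard Xam.Plugin.Media pattern. Here’s a rough sketch of what that looks like; the helper name and option values are illustrative, not necessarily what ended up in the app:

using System.IO;
using System.Threading.Tasks;
using Plugin.Media;
using Plugin.Media.Abstractions;

public static class PhotoHelper // illustrative helper name
{
    // Takes a photo and returns its bytes, resized on capture so the
    // Computer Vision API's 4 MB limit isn't a problem.
    public static async Task<byte[]> TakePhotoAsync()
    {
        await CrossMedia.Current.Initialize();

        if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
            return null;

        var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
        {
            PhotoSize = PhotoSize.Medium,    // downsize on capture
            CompressionQuality = 80
        });

        if (photo == null)
            return null;

        using (var stream = photo.GetStream())
        using (var memory = new MemoryStream())
        {
            await stream.CopyToAsync(memory);
            return memory.ToArray();
        }
    }
}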

Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
 
namespace ReptileTracker.Services
{
     public interface IImageRecognitionService
     {
         string ApiKey { get; set; }
         Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
     }
}

Now it was time to check out Mike’s article. It made sense, and was close to what I wanted. However, the packages he referenced were for Microsoft’s Project Oxford. By 2018, those capabilities had been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

A few developer notes for those playing with Azure Cognitive Services:

  • Hold on to that API key, you’ll need it
  • Pay close attention to the Endpoint on the Overview page – you must provide it, otherwise you’ll get a 403 Forbidden


And here’s the implementation. Note the implementation must have a parameter-less constructor, otherwise Dependency Service won’t resolve it.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using ReptileTracker.Services;
using Xamarin.Forms;
 
[assembly: Dependency(typeof(ImageRecognitionService))]
namespace ReptileTracker.Services
{
    public class ImageRecognitionService : IImageRecognitionService
    {
        /// <summary>
        /// The Azure Cognitive Services Computer Vision API key.
        /// </summary>
        public string ApiKey { get; set; }
 
        /// <summary>
        /// Parameterless constructor so Dependency Service can create an instance.
        /// </summary>
        public ImageRecognitionService()
        {
 
        }
 
        /// <summary>
        /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
        /// </summary>
        /// <param name="apiKey">API key.</param>
        public ImageRecognitionService(string apiKey)
        {
 
            ApiKey = apiKey;
        }
 
        /// <summary>
        /// Analyzes the image.
        /// </summary>
        /// <returns>The image.</returns>
        /// <param name="imageStream">Image stream.</param>
        public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
        {
            const string funcName = nameof(AnalyzeImage);
 
            if (string.IsNullOrWhiteSpace(ApiKey))
            {
                throw new ArgumentException("API Key must be provided.");
            }
 
            var features = new List<VisualFeatureTypes> {
                VisualFeatureTypes.Categories,
                VisualFeatureTypes.Description,
                VisualFeatureTypes.Faces,
                VisualFeatureTypes.ImageType,
                VisualFeatureTypes.Tags
            };
 
            var credentials = new ApiKeyServiceClientCredentials(ApiKey);
            var handler = new System.Net.Http.DelegatingHandler[] { };
            using (var visionClient = new ComputerVisionClient(credentials, handler))
            {
                try
                {
                    imageStream.Position = 0;
                    visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                    var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                    return result;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                    return null;
                }
            }
        }
 
    }
}

And here’s how I referenced it from my content page:

pleaseWait.IsVisible = true;
pleaseWait.IsRunning = true;
var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
pleaseWait.IsRunning = false;
pleaseWait.IsVisible = false;

var tagsReturned = details?.Tags != null 
                   && details?.Description?.Captions != null 
                   && details.Tags.Any() 
                   && details.Description.Captions.Any();

lblTags.IsVisible = true; 
lblDescription.IsVisible = true; 

// Determine if reptiles were found. 
var reptilesToDetect = AppResources.DetectionTags.Split(','); 
var reptilesFound = tagsReturned && details.Tags.Any(t => reptilesToDetect.Contains(t.Name.ToLower()));  

// Show animations and graphics to make things look cool, even though we already have plenty of info. 
await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

That worked like a champ, with a few gotchas:

  • I would receive a 400 Bad Request if I sent an image that was too large. 1024 x 768 worked, but 2000 x 2000 didn’t. The documentation says the image must be less than 4MB, and at least 50×50.
  • That API endpoint must be initialized. Examples don’t always make this clear. There’s no constructor that takes an endpoint address, so it’s easy to miss.
  • It can take a moment for recognition to occur. Make sure you’re using async/await so you don’t block the UI Thread!

Prettying It Up

Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.

Mixed Results

So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned. Some examples:

  • My barista, Matt, was “a smiling woman working in a store”
  • My mom was “a smiling man” – she was not amused

Most of the time, as long as the subjects were clear, the scene recognition was correct:

[screenshot: a correctly recognized scene]

Or close to correct, in this shot with a turtle at Petsmart:

[screenshot: the turtle at Petsmart]

Sometimes, though, nothing useful would be returned:

[screenshot: a White Castle photo with no useful tags returned]

I would have thought it would have found “White Castle”. I wonder if it avoids returning brand names for some reason? They do have an OCR endpoint, so maybe that would be useful in another use case.

Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

[screenshot: an image that wasn’t recognized]

I’ll need to read more about how to improve accuracy, if that’s even an option.

Good thing I implemented it with an interface! I could try Google’s computer vision services next.

Next Steps

We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

Some things I’d like to try:

  • Highlight the tags in the image, by drawing over the image. I’d make this a toggle.
  • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

Please let me know if you have any questions by leaving a comment!

Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

Update 2018-11-06:

  • The tags are in two different locations – Tags and Description.Tags. Two different sets of tags are in there, so I’m now combining those lists and getting better results (a quick sketch is below).
  • I found I could get color details. I’ve updated the accent color surrounding the photo. Just a nice design touch.
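
Here’s a quick sketch of how the two tag lists can be combined; the property shapes (Tags on ImageAnalysis, string tags under Description) come from the Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models package, and the helper itself is just for illustration:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

public static class TagHelper // illustrative helper name
{
    // Merges details.Tags (ImageTag objects) with details.Description.Tags (plain strings)
    // into a single, de-duplicated, lower-cased list.
    public static List<string> CombineTags(ImageAnalysis details)
    {
        var fromTags = details?.Tags?.Select(t => t.Name) ?? Enumerable.Empty<string>();
        var fromDescription = details?.Description?.Tags ?? Enumerable.Empty<string>();

        return fromTags
            .Concat(fromDescription)
            .Where(t => !string.IsNullOrWhiteSpace(t))
            .Select(t => t.ToLowerInvariant())
            .Distinct()
            .ToList();
    }
}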

I ran into this issue today when debugging on Android, so I’m posting what took an hour to figure out 🙂 This is for when you’re getting a null reference exception when attempting to scan. I was following the instructions here, and then, well, it wouldn’t work 🙂

Rather than using the Dependency Resolver, you’ll need to pass in the Application Context from Android. So, in the App, create a static reference to the IQrCodeScanningService, as follows:

	public partial class App : Application
	{
 
	    public static IQrCodeScanningService QrCodeScanningService;

Then, populate that static instance from the Android app, as follows:

App.QrCodeScanningService = new QrCodeScanningService(this);
global::Xamarin.Forms.Forms.Init(this, bundle);
LoadApplication(new App());

Obviously you’ll also need a matching constructor, like so:

public class QrCodeScanningService : IQrCodeScanningService
{
    private readonly Context _context;
 
    public QrCodeScanningService(Context context)
    {
        _context = context;
    }

This solved the problem like magic for me. I hope it helps you, too!

P.S. Make sure you have the CAMERA permission. I’ve read you may also need the FLASHLIGHT permission, although I’m not entirely sure that’s required.
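
If you’d rather declare the permission in code than edit the manifest by hand, Xamarin.Android’s assembly-level attribute should do the same thing. A minimal sketch (double-check the generated AndroidManifest to confirm the permission landed):

using Android.App;

// Emits <uses-permission android:name="android.permission.CAMERA" /> into the generated AndroidManifest.
[assembly: UsesPermission(Android.Manifest.Permission.Camera)]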

Alright, I found a Moto 360 and I’m enjoying it. The following is not my review. It is a list of bugs Motorola and Google need to fix on this device and across Android Wear. Note this is only what I’ve noticed after one day. I’ll post more as I explore.

  • When you take the watch out of the box, it either doesn’t turn on or has a low battery. That’s understandable. What’s not alright is that there’s no prompt about the battery level or what to do. It’s simply “Connect your device to Android Wear,” or something to that effect. That’s very un-user-friendly. Where were the UX guys during the setup process?
  • Only one watch face shows the date. $250 and no date? Seriously?
    • Update, thanks to Rich DeMuro: Drag down slightly to see the date.
  • When asking the watch to make a call to a contact with more than one number, it asks "Which One?" However, it doesn’t give you a list. Saying "the first one" works, but I don’t know what I selected until it dials.
  • There’s no confirmation request when sending a text… it just sends it.
  • It sometimes stops listening or lists your options when listening.
  • It sometimes starts listening when it shouldn’t.
  • Carrier messaging apps break the ability to reply to texts. I had to disable Verizon Messaging entirely.
  • Facebook support for displaying the new comments would be nice, like the email display feature.
  • There’s no battery level meter anywhere on the device, or at least not one that’s obvious.
    • Update, thanks to Rich DeMuro: Drag down slightly from the top to see battery level.
  • The Android Wear app doesn’t show battery level, but Moto Connect does. Weird?
  • Sometimes Google search results take precedence over actions. For example, saying "play ebay by weird al" brings up YouTube results. However, "play technologic by daft punk" plays the song. It’s hit or miss.
  • So far, adding a calendar entry hasn’t worked.
  • There needs to be a notification center to control which notifications are sent to the phone. Yes, you can do it via the App Manager, but it’s horrible.
  • The accelerometer doesn’t always sense the wrist has been moved to a viewing angle.
  • When driving, the accelerometer appears to trigger the display to turn on *a lot*. It’s not good when driving kills your battery.
  • A speaker would be helpful for prompts.

Added 9/15 afternoon:

  • The Motorola Feedback website doesn’t list the Moto 360 as a product. So, how do I register it or get support?
  • The device occasionally says it’s offline when the phone is only three feet away. I’m thinking this is a bug in the Google Now integration and not an actual communications issue.
  • Asking the device “What is the battery level?” always causes the phone to report it’s offline.

Added 9/17:

  • Saying “Call <insert name here> on cell” doesn’t work most of the time, but saying the same “on mobile” is generally reliable.
  • Calling “Send text to <insert name here>” sometimes asks “Which one?” but only shows the phone numbers. I wasn’t sure if I was sending to the right person because the name wasn’t listed.
  • Most of the time, when the screen turns on when moving even the slightest, the watch starts listening, even if I don’t say “Ok, Google”. It’s very annoying.
  • It would be nice if “Ok, Google” could be changed to something else. I feel like I’m advertising Google every time I use my watch.
  • The pedometer seems inaccurate, registering phantom steps as far as I can tell. The inaccuracy extends to the heart rate monitor. After a long workout, the monitor said I was at 74 bpm, then 90. I took my own pulse, and it was quite off the mark.

Added 9/30:

  • The latest build, 4.4W.1 KGW42R, has greatly improved battery life. On an average day of use, unplugging the watch at around 7am, I was still at 20% at roughly 9:45pm. Great job, Motorola!
  • Even with Messaging as the default app, I have no option to Reply to texts when the notification appears. This may be due to HTC overriding some default app, but I’m unsure.

A few tips:

To launch apps, go to the Google screen, then go to Start… and you can select an app.

You can say the following things and it’s really cool:

  • Call <person’s name> on mobile
  • Play the song <song name>
  • Play the song <song name> by <artist name>
  • What is the current stock price of <company name>

I’ve been putting off finishing my HTC One M8 review for a couple months. I’m hoping to finish it soon, but for now, here’s my draft…

A Dilemma

Before I start my review, I need to explain the technology dilemma of new phones, and new laptops and desktops, too, for that matter. Technology has reached a performance and feature point where it’s hard for manufacturers to prove the necessity of their new products in these categories. Case in point – my previous phone, the Galaxy Nexus, was perfectly fast for everything I did with it. Sure, it wouldn’t launch apps or take photos as quickly as the newer devices, but it was acceptably fast, so much so that, as I shopped for a new product, the newer devices weren’t obviously beneficial.

I imagine my dilemma similarly affects the PC market. For the average consumer, is the laptop of today that much better than the laptop of two years ago? For users who spend most of their time plugged in, as many I’ve met do, will they notice the processor speed? The display? They’ll definitely notice the SSD speed and the touch screen. Yet their old systems are acceptably fast. Lucky for them, new laptops are affordable. Desktop PCs? That’s a different story – there’s nothing really new about them that you’d need to upgrade, and you don’t see many shipping with SSDs.

Phones, unlike laptops and desktops, are lucky in that they a) remain popular enough that consumers buy them even when upgrades are unnecessary, and b) have sex appeal. You rarely hear anyone rave about their chic new laptop these days. Well, you used to… That desire has shifted to the phone, now a mini laptop in itself. Yet, beyond the better battery life, what makes a phone better today, other than the fact that you can get a new model up front and have it paid off [again] in two years?

Anyway, I ignored all that introspection and needs analysis. I bought the HTC One M8.

The Phone

First, let’s talk about the One. It’s beautiful. It’s slick. A bit too slick, as the aluminum is so smooth I was often afraid it would fall out of my hands. Thankfully, HTC provides one free screen replacement in the first six months. I like little support touches like that. The HTC Dot View case solved my grippiness issue, which I’ll discuss below. Wow, though – it’s a beautiful phone. I had a number of people ask me “Hey, what phone is that?” and oftentimes heard “I think I’ll be switching to an Android phone next. Wow, that screen is big.” Maybe Google should be courting HTC for its next Galaxy phone?

The Camera

The HTC One takes great photos. So why isn’t it my favorite camera? First, we need to explain the difference between HTC’s approach to phone cameras and practically everybody else’s: bigger pixel sensor size versus more pixels. The One sports 2 micron sensors vs. the 1.3 micron sensors used by practically every other flagship phone from Samsung, Google, and even Nokia. However, it only has a 4 megapixel effective resolution, versus 13+ on the others. True, the larger sensors bring in more light, and make the HTC One an excellent low light level camera. But when it comes to image quality, that lack of additional resolution makes every shot a make-it-or-break-it affair. With a 16 megapixel imager, for example, you could get a large shot and crop to something perfect. But with 4 megapixels, you’ve got to get it right the first time, lest you risk cropping to Facebook resolution. Definitely nothing good to print, and sometimes so few pixels there’s nothing good to display, either.

To be fair, the One takes excellent photos. Albeit quite a bit overexposed when there’s too much light… You can’t get balanced exposure between, say, the sky and the grass on a partly cloudy day. If you focus on the grass, the sky turns white. If you focus on the sky, the grass turns almost black. It sounds like something that can be solved with software… I’m hoping HTC has something in the works.

A few bugs I noticed, in case HTC is listening:

  • You can’t add stickers to a photo taken with a flash or low light. I have no idea why.
  • U Focus is not available for flash or low light photos, either.
  • Facebook uploads from the HTC One M8 appear to be very low resolution. I’ve seen this issue on many HTC Android phones. It looks like HTC has their own Facebook for HTC, but I can’t exactly confirm which uploader is being used when sharing.

The Dot View Case – The Sleeper Accessory Success, Much as Austin Powers Was a Sleeper Movie Success

Long title, but true. The Dot View case may seem like a gimmick, but it does a great job at what it’s supposed to do. Lined with little holes that form letters and shapes when combined with the One’s screen gestures, you can check the time, make a phone call, answer and decline phone calls, and see if you have any messages all without ever looking at your screen. Samsung and other manufacturers have done similar things by putting cutouts in cases, too. Yet HTC’s approach is unique, and very, very cool. I think many folks who have seen my little demos of the Dot View case are thinking the One is their next phone. Maybe it’s just sheer luck for HTC, but I don’t think I’ve met anyone whose contract isn’t about to expire this year. Good thing I’m not in charge of a survey! <grin>

Ok, I learned this sort of the hard way today… I picked up the brand spanking new HTC One M8 yesterday. So far it’s a fantastic phone. I wanted to add a 32 GB MicroSD card, since it wonderfully supports such expansion. Beware! There’s a little tray that comes out when you use the paper clip in the little hole. Put the MicroSD card in that tray! I thought the tray was simply a placeholder at first, so I slyly proceeded to insert the card directly into the hole. Whoops!

If you fall into the same trap, it’s easy to get the MicroSD card out. First, you might as well finish the formatting steps – it’s in there anyway. When that’s done, use the paperclip to release the MicroSD card from the tray. Yes, I know it won’t come out all the way. After releasing it via the eject hole, use the side of the paperclip to gently pull the card out from the right side a little bit. Once you can see the plastic of the card, pull it out the rest of the way with your fingers. Problem solved!

Good luck!

-Auri