Telerik Blogs | Web: The official blog of Progress Telerik, with expert articles and tutorials for developers.

Angular Resource and rxResource
Hassan Djirdeh | 2025-02-24
See how to handle asynchronous behavior with Angular resource and rxResource APIs.

Angular 19 introduces two experimental APIs—resource and rxResource—designed to simplify handling asynchronous dependencies within Angular’s reactive framework. These APIs elegantly manage evolving data, such as API responses or other async operations, by tightly integrating with Angular signals.

In this article, we’ll explore these new APIs, showcase their use cases through some examples and provide insights into how they enhance asynchronous workflows.

Resource API

The resource API bridges Angular’s signal-based state management with asynchronous operations. By combining request signals, loader functions and resource instances, it provides a declarative way to manage asynchronous data.

For an introduction to Angular Signals, be sure to check out the articles we’ve recently published—Angular Basics: Signals and Angular Basics: Input, Output, and View Queries.

Imagine a scenario where we want to fetch weather details based on a city name.

import { Component, signal, resource } from '@angular/core';

@Component({
  selector: 'app-weather-info',
  templateUrl: './weather-info.component.html',
  styleUrls: ['./weather-info.component.css']
})
export class WeatherInfoComponent {
  city = signal<string>('New York');

  weatherData = /* fetch weather details */
}

Using resource, the setup becomes reactive and streamlined:

import { Component, signal, resource } from "@angular/core";

@Component({
  selector: "app-weather-info",
  templateUrl: "./weather-info.component.html",
  styleUrls: ["./weather-info.component.css"],
})
export class WeatherInfoComponent {
  city = signal<string>("New York");

  weatherData = resource({
    request: this.city,
    loader: async ({ request: cityName }) => {
      const response = await fetch(
        `https://api.weatherapi.com/v1/current.json?q=${cityName}`
      );
      if (!response.ok) {
        throw new Error("Failed to fetch weather details");
      }
      return await response.json();
    },
  });

  updateCity(newCity: string): void {
    this.city.set(newCity);
  }
}

In the above example, the city signal acts as the input, determining which weather details to fetch. Any updates to city automatically retrigger the loader function, fetching new data based on the updated city name. The weatherData resource instance tracks the current data (value), loading status (isLoading) and any errors (error).
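As a quick illustration, the component template can read these signals directly. The following is a minimal sketch (it assumes a weather-info.component.html file, Angular’s built-in control-flow syntax and that the JsonPipe is available to the component):

@if (weatherData.isLoading()) {
  <p>Loading weather for {{ city() }}…</p>
} @else if (weatherData.error()) {
  <p>Could not load weather details.</p>
} @else {
  <pre>{{ weatherData.value() | json }}</pre>
}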

Dynamic Updates Without Fetching

The update method allows modifying local resource data without triggering a server request. For instance, we can add a local timestamp to the weather data:

this.weatherData.update((data) => {
  if (!data) return undefined;
  return { ...data, timestamp: new Date().toISOString() };
});

This capability ensures immediate UI updates while preserving reactivity.

rxResource

For applications deeply integrated with RxJS, rxResource provides an observable-based counterpart to resource. It seamlessly connects signals to observables, enabling a more reactive approach to data fetching.

Suppose we want to display a paginated list of books in a library system.

import { Component, signal } from '@angular/core';
import { rxResource } from '@angular/core/rxjs-interop';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-library',
  templateUrl: './library.component.html',
  styleUrls: ['./library.component.css']
})
export class LibraryComponent {
  page = signal<number>(1);

  books = /* fetch books */
}

The rxResource function makes this simple:

import { Component, inject, signal } from "@angular/core";
import { rxResource } from "@angular/core/rxjs-interop";
import { HttpClient } from "@angular/common/http";

@Component({
  selector: "app-library",
  templateUrl: "./library.component.html",
  styleUrls: ["./library.component.css"],
})
export class LibraryComponent {
  // Inject HttpClient as a field so it is available to the books initializer below
  private http = inject(HttpClient);

  page = signal<number>(1);

  books = rxResource({
    request: this.page,
    loader: (params) =>
      // Book is the list item model, assumed to be defined elsewhere in the app
      this.http.get<Book[]>(
        `https://api.example.com/books?page=${params.request}`
      ),
  });

  updatePage(newPage: number): void {
    this.page.set(newPage);
  }
}

Here, the page signal represents the current page number. Changes to page automatically trigger the loader function to fetch books for the specified page. Using rxResource enables Angular’s reactive model to integrate smoothly with observable streams.
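A matching template sketch for the paginated list might look like the following (illustrative only; it assumes a Book model with a title property and the built-in control-flow syntax):

@if (books.isLoading()) {
  <p>Loading books…</p>
} @else {
  <ul>
    @for (book of books.value() ?? []; track book.title) {
      <li>{{ book.title }}</li>
    }
  </ul>
  <button (click)="updatePage(page() + 1)">Next page</button>
}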

For an introduction to observables in Angular, check the following articles: Angular Basics: Introduction to Observables (RxJS)—Part 1 and Angular Basics: Introduction to Observables (RxJS)—Part 2.

Wrap-up

The resource and rxResource APIs offer a declarative and reactive approach to managing asynchronous workflows in Angular. They address challenges like race conditions and request cancellation, provide integrated state tracking, and enable seamless updates tied to signal changes.

These APIs enhance the framework’s reactive capabilities by tightly coupling Angular’s signal system with asynchronous data handling. Though experimental, they represent a forward-thinking approach to handling dynamic data and promise to become essential tools in Angular’s asynchronous programming landscape. For more details on resource and rxResource, check the official Angular documentation.

Image Manipulation with NestJS and Sharp
Christian Nwamba | 2025-02-21
Learn the importance of image optimization and how to integrate the Sharp library with NestJS to perform some image manipulation techniques.

In this post, we will create an image manipulation web app using the Sharp library and NestJS. We will learn the importance of image optimization and how to integrate the Sharp library with NestJS to perform image manipulation techniques like blurring, changing image format, rotating, flipping, etc.

What Is Sharp?

Sharp is an easy-to-use Node.js library for image processing. It is widely used because of its speed, output image quality and minimal code requirement.

For example, the code for resizing an image would look like this:

const sharp = require("sharp");

async function resizeImage() {
  try {
    await sharp("house.jpg")
      .resize({
        width: 40, // target width in pixels
        height: 107, // target height in pixels
      })
      .toFile("resized-house.jpg");
  } catch (error) {
    console.log(error);
  }
}

What Is NestJS?

NestJS is a backend Node.js framework widely loved for its modular architecture. This architecture promotes separation of concerns, allowing developers and teams to quickly build scalable and maintainable server-side applications.

NestJS also has several built-in modules to handle common tasks like file uploads and serving static files, which we will use to integrate Sharp into our app.

Importance of Image Optimization

While images enhance websites, unoptimized ones can slow them down. Unoptimized images lead to longer load times, poor user experience and increased storage costs, and can be the difference between an amazing site and a terrible one.

Image optimization means delivering high-quality visuals in an efficient format and size. Because it directly impacts user retention and search engine rankings, it is well worth the effort.

Areas for Image Optimization

Some key areas to consider are file format, image quality and size.

File Format

When choosing a file format, you first have to distinguish between older and more widely supported formats and newer, better-performing formats.

Older Formats:

  • JPEG – A popular option for photos and detailed images. Think of images on a photography portfolio website.
  • PNG – Think of graphics, images with sharp contrasts or transparent backgrounds.

Newer Formats:

  • WebP – This is a good choice when prioritizing performance. It significantly reduces file size and is supported by most modern browsers.
  • AVIF – This is another good choice for performance, often giving better compression than WebP and producing smaller files at the same quality level, although it is not as widely adopted as WebP.

Newer formats like WebP and AVIF are becoming more preferred, and fallback images can be used for compatibility with browsers that do not support them.
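In markup, that fallback is commonly expressed with the picture element, where the browser uses the first source it supports and falls back to the img tag otherwise (a small sketch with hypothetical file names):

<picture>
  <source srcset="house.avif" type="image/avif" />
  <source srcset="house.webp" type="image/webp" />
  <img src="house.jpg" alt="A house" />
</picture>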

Quality and Size

  • Optimize for the content – Tailor image quality based on purpose. Photos can tolerate more compression with lower quality, while images with text, fine lines or logos should have higher quality to preserve sharpness.
  • Always resize before use – A pro tip is to resize images to the exact dimensions they will be used. This avoids unnecessarily large file sizes, improving rendering speeds and overall efficiency.
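With Sharp, both tips can be applied in one pipeline. The snippet below is only a sketch (the file names, dimensions and quality value are arbitrary): it resizes the image to the exact dimensions used on the page and converts it to WebP with a moderate quality setting.

const sharp = require("sharp");

async function optimizeForWeb() {
  try {
    await sharp("hero.jpg")
      .resize({ width: 800, height: 450 }) // the exact dimensions used on the page
      .webp({ quality: 75 }) // convert to WebP with moderate compression
      .toFile("hero-800x450.webp");
  } catch (error) {
    console.log(error);
  }
}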

Project Setup

To begin, you’ll need to install the NestJS CLI if you haven’t done so already. Run the following command to install it:

npm i -g @nestjs/cli

Next, let’s use the NestJS CLI to create a new project by running the following command:

nest new sharp-demo

You will be prompted to pick a package manager. For this article, we’ll use npm (node package manager).

Package manager options

Once the installation is done, you’ll have a new folder called sharp-demo. This will be our project directory, and you can navigate to it by running this command:

cd sharp-demo

Next, let’s run the following command to scaffold an images module:

nest g resource images

Then select the REST API as shown in the image below.

Create images module

Install Dependencies

Run the following command to install the dependencies we will need for our project:

npm i sharp && npm i --save @nestjs/serve-static && npm i -D @types/multer

The command above installs the sharp package, which we’ll use for image manipulation; the NestJS serve-static package, which we will use to serve our index.html; and the Multer typings package, which we will use in our file interceptor to extract our image.

Serve HTML Page

For the frontend of our app, we will create an HTML page and serve it with the NestJS ServeStaticModule.

Update your app.module.ts file with the following:

import { Module } from "@nestjs/common";
import { AppController } from "./app.controller";
import { AppService } from "./app.service";
import { ImagesModule } from "./images/images.module";
import { ServeStaticModule } from "@nestjs/serve-static";
import { join } from "path";

@Module({
  imports: [
    ServeStaticModule.forRoot({
      rootPath: join(__dirname, "..", "public"),
    }),
    ImagesModule,
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

In the code above, we configure our app to serve static files, pointing to the public directory located one level above the compiled output directory (__dirname) as the folder containing them.

Next, create a folder called public at the root of your project, and in it, create two files named index.html and script.js.

The project directory should now look like this:

Project directory

Update the index.html file with the following:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Image Manipulation with NestJS and Sharp</title>
    <style>
      body {
        font-family: Arial, sans-serif;
        margin: 20px;
      }
      .form-group {
        margin-bottom: 15px;
      }
      #output-container {
        margin-top: 20px;
        display: flex;
        gap: 20px;
      }
      img {
        max-width: 300px;
        max-height: 300px;
        object-fit: contain;
        border: 1px solid #ddd;
        padding: 5px;
      }
    </style>
  </head>
  <body>
    <h1>Image Manipulation with NestJS and Sharp</h1>
    <form id="imageForm">
      <div class="form-group">
        <label for="fileInput">Upload Image:</label>
        <input
          type="file"
          id="fileInput"
          name="file"
          accept="image/*"
          required
        />
      </div>
      <div class="form-group">
        <label for="technique">Choose Manipulation Technique:</label>
        <select id="technique" name="technique" required>
          <option value="blur">Blur</option>
          <option value="rotate">Rotate</option>
          <option value="tint">Tint</option>
          <option value="grayscale">Grayscale</option>
          <option value="flip">Flip</option>
          <option value="flop">Flop</option>
          <option value="crop">Crop</option>
          <option value="createComposite">Create Composite Image</option>
        </select>
      </div>
      <div class="form-group">
        <label for="format">Choose Output Format:</label>
        <select id="format" name="format" required>
          <option value="jpeg">JPEG</option>
          <option value="png">PNG</option>
          <option value="webp">WebP</option>
          <option value="avif">AVIF</option>
        </select>
      </div>
      <button type="submit">Submit</button>
    </form>

    <div id="output-container" style="display: none">
      <div>
        <h3>Original Image</h3>
        <img
          id="originalImage"
          width="300"
          height="300"
          src="#"
          alt="Original Image"
        />
      </div>
      <div>
        <h3>Manipulated Image</h3>
        <img id="manipulatedImage" src="#" alt="Manipulated Image" />
      </div>
    </div>

    <script src="script.js"></script>
  </body>
</html>

Also update the script.js file with the following:

document.addEventListener("DOMContentLoaded", () => {
  const form = document.getElementById("imageForm");
  const manipulatedImage = document.getElementById("manipulatedImage");
  const outputContainer = document.getElementById("output-container");
  const originalImage = document.getElementById("originalImage");

  form.addEventListener("submit", async (e) => {
    e.preventDefault();

    const formData = new FormData(form);
    const file = formData.get("file");
    if (!file) return;

    try {
      const response = await fetch("http://localhost:3000/images/process", {
        method: "POST",
        body: formData,
      });

      const result = await response.json();

      if (result.imageBase64) {
        manipulatedImage.src = result.imageBase64;

        const reader = new FileReader();
        reader.onloadend = function () {
          originalImage.src = reader.result;
        };
        reader.readAsDataURL(file);

        outputContainer.style.display = "flex";
      } else {
        alert("File upload failed");
      }
    } catch (error) {
      console.error("Error during upload:", error);
    }
  });
});

In the code above, we extract the processed image in Base64 format from the response and set it as the source of the manipulatedImage element, which allows it to be displayed dynamically.

Set Up Images Controller

Next, let’s update our ImagesController to handle file uploads and return the processed image in Base64 format.

Update the images.controller.ts file with the following:

import {
  Body,
  Controller,
  Post,
  UploadedFile,
  UseInterceptors,
} from "@nestjs/common";
import { ImagesService } from "./images.service";
import { FileInterceptor } from "@nestjs/platform-express";
import * as sharp from "sharp";

@Controller("images")
export class ImagesController {
  constructor(private readonly imagesService: ImagesService) {}

  @Post("process")
  @UseInterceptors(FileInterceptor("file"))
  async uploadAndProcessFile(
    @UploadedFile() file: Express.Multer.File,
    @Body()
    body: {
      technique: string;
      format: keyof sharp.FormatEnum;
    }
  ) {
    const base64Image = await this.imagesService.processImage(
      file,
      body.technique,
      body.format
    );
    return {
      message: "File uploaded and processed successfully",
      imageBase64: base64Image,
    };
  }
}

In the code above, we use the FileInterceptor to extract the uploaded file, while the technique and format are extracted from the request body with the @Body decorator. These parameters are then passed to the processImage method of ImagesService for processing.

Set Up Images Service

At the end of this section, your images.service.ts file should look like this:

Finished images.service.ts file

Update the images.service.ts file with the following:

import * as sharp from "sharp";
import { Injectable } from "@nestjs/common";

@Injectable()
export class ImagesService {
  async processImage(
    file: Express.Multer.File,
    technique: string,
    format: keyof sharp.FormatEnum
  ): Promise<Buffer> {
    try {
      const method = (this as any)[technique];
      if (typeof method !== "function") {
        throw new Error(
          `Method "${technique}" is not defined or not a function`
        );
      }
      return await method.call(this, file, format);
    } catch (error) {
      console.error(`Method "${technique}" is not defined or not a function`);
      throw new Error(
        `Failed to process image with technique "${technique}": ${error.message}`
      );
    }
  }
}

In the code above, the processImage method searches and calls a matching method in the ImagesService class based on the technique passed. This gives the flexibility to use different image manipulation methods with a single route based on the value of the technique in the request body.

Blurring an Image

Now that we have set up the processImage method, we can add any other method needed for image manipulation to the ImagesService class, and it can be used by passing its name as the value of the technique in the request body.

Let’s add the blur() method to blur an image:

async blur(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
    const processedBuffer = await sharp(file.buffer)
        .resize(300, 300)
        .blur(10)
        .toFormat(format)
        .toBuffer()
    return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The code above takes an image’s buffer (the raw image data) and passes it to Sharp. The resize() method then resizes the image to 300x300 pixels. Next, the blur() method applies a blur effect with a strength of 10; the method accepts values from 0.3 to 1000.

Next, we convert the image to the specified format and generate the processed image as a buffer. Finally, the buffer is encoded to Base64 and sent as a data URL string.

The result when called is shown below:

The result of blurring the image

Rotating an Image

Next, we’ll add the rotate() method to rotate an image:

async rotate(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
  const processedBuffer = await sharp(file.buffer)
      .resize(300, 300)
      .rotate(140, { background: "#ddd" })
      .toFormat(format)
      .toBuffer()
  return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The rotate() method takes the rotation angle and can also take a custom background color for non-90° angles.

When called, the result is:

The result of resizing and rotating the image

It is important to note that each transformation occurs on the image as it exists at that step, so if we had rotated the image before resizing it, we would have a different result, as shown below.
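In code, that alternative order is simply a matter of swapping the first two chained calls (a sketch; the rest of the method stays the same):

const processedBuffer = await sharp(file.buffer)
    .rotate(140, { background: "#ddd" }) // rotate the full-size image first
    .resize(300, 300) // then resize the rotated canvas
    .toFormat(format)
    .toBuffer()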

The result of rotating and resizing the image

Tinting an Image

Let’s add the tint() method to tint an image:

async tint(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
    const processedBuffer = await sharp(file.buffer)
        .resize(300, 300)
        .tint({ r: 150, g: 27, b: 200 })
        .toFormat(format)
        .toBuffer()
    return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The tint() method changes the color of an image by applying a specified tint based on the red, green, and blue (RGB) values. The range for each value is 0-255.

When called, the result is:

The result of tinting the image

Converting Image to Grayscale

Next, let’s add the grayscale() method to convert an image to grayscale:

async grayscale(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
  const processedBuffer = await sharp(file.buffer)
      .resize(300, 300)
      .grayscale() // or greyscale()
      .toFormat(format)
      .toBuffer()
  return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The grayscale() and greyscale() methods remove all the color information and represent the image using shades of gray.

When called, the result is:

The result of converting the image to grayscale

Flipping an Image

Let’s add the flip() method to flip an image:

async flip(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
  const processedBuffer = await sharp(file.buffer)
      .resize(300, 300)
      .flip()
      .toFormat(format)
      .toBuffer()
  return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The flip() method vertically reverses an image. When called, the result is:

The result of flipping the image

Flopping an Image

We can use the flop() method to flop an image:

async flop(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
  const processedBuffer = await sharp(file.buffer)
      .resize(300, 300)
      .flop()
      .toFormat(format)
      .toBuffer()
  return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

This method horizontally reverses an image. When called, the result is:

The result of flopping the image

Cropping an Image

Next, let us add the crop() method to crop an image:

async crop(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
  const processedBuffer = await sharp(file.buffer)
      .extract({ left: 140, width: 1800, height: 1800, top: 140 })
      .resize(300, 300)
      .toFormat(format)
      .toBuffer()
  return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The extract() method allows us to describe a box within the image to keep, cropping the rest.

  • left – The horizontal position where the box should start
  • width – The width of the box
  • top – The vertical position where the box should start
  • height – The height of the box

When called, the result is:

The result of cropping the image

As mentioned above, the order when chaining the Sharp methods is important. This is why we called extract() before resize()—otherwise, we would get a different result.

Creating a Composite Image

Finally, let’s add the createComposite method:

async createComposite(file: Express.Multer.File, format: keyof sharp.FormatEnum) {
  const greyHouse = await sharp(file.buffer)
      .resize(150, 150)
      .grayscale()
      .toBuffer()
  const processedBuffer = await sharp(file.buffer)
      .composite([
          {
              input: greyHouse,
              top: 50,
              left: 50,
          },
      ])
      .resize(300, 300)
      .toFormat(format)
      .toBuffer()
  return `data:image/${format};base64,${processedBuffer.toString('base64')}`;
}

The composite() method takes an array of overlay objects containing input, top and left properties. It positions each overlay using the top and left properties.

In the code above, we create a small grayscale version of the image to use as an overlay. When called, the result is:

The result of creating a composite image

Starting Server

Now that we have completed the setup, we can start our server. Save all the files and start the NestJS server by running the following command:

  npm run start

You should see this:

Successful server start-up

To access our index.html page, navigate to http://localhost:3000/index.html.

Conclusion

Images play an important role on the web. Hence, optimizing them and using other image manipulation techniques with libraries like Sharp and backend frameworks like NestJS is important.

After building the image manipulation web app, you should be able to blur, rotate, tint, convert to grayscale, flip, flop, crop, create a composite image and convert images to a different format.

Blazor Basics: Lazy Load Assemblies to Boost the Performance of Blazor WebAssembly
Claudio Bernasconi | 2025-02-20
Learn how to implement lazy loading assemblies in Blazor WebAssembly to improve the app’s performance.

The biggest challenge and criticism Blazor WebAssembly faces is its first load performance.

There are several performance optimizations you can apply to improve the overall performance of a Blazor WebAssembly application. However, depending on the application’s size, the WebAssembly file downloaded to the client can cause a noticeable delay.

In this article, I will show you how to implement lazy loading assemblies into a Blazor WebAssembly application. This approach allows you to shrink the main assembly’s size and load more code on demand.

You can access the code used in this example on GitHub.

Introduction to Lazy Loading Assemblies

With lazy loading, we can defer the loading of an assembly until the user navigates to a route that requires the assembly.

For example, we have a member section on the website that is only used by 5% of visitors.

A graphic showing the difference between using lazy loading and not using lazy loading. When using lazy loading, the total size of the application is split into multiple bundles.

With lazy loading, we can let the client download the website without the member section.

When the user navigates to the member section, for example, by clicking on a link in a navigation menu, we load the assembly containing the member section.

Creating a Standalone Blazor WebAssembly Application

First, we create a Blazor WebAssembly Standalone application. Lazy loading also works when hosting the application using an ASP.NET Core server project, but we want to focus on the client-side WebAssembly application.

I name the application BlazorWasmLazyLoading. This name is important because I will use the same naming scheme when creating and referencing the Razor Class Library.
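If you prefer the .NET CLI over an IDE template dialog, a command along these lines creates the standalone project (the standard Blazor WebAssembly template; adjust to your tooling if needed):

dotnet new blazorwasm -o BlazorWasmLazyLoading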

Creating a Razor Class Library

The foundation of implementing lazy loading for a Blazor WebAssembly application is splitting the components into different projects during development. We must create a Razor Class Library and move all the components we want to lazy load from the main Client project into that Razor Class Library.

In this example, we want to implement a member section, which will only be loaded when the user navigates to the /members route.

We create a new Razor Class Library and name it BlazorWasmLazyLoading.Members. In this Razor Class Library, I add a Pages folder and the following Members page component:

@page "/members"

<h1>Member Login</h1>

<input type="text" placeholder="Username" />
<input type="password" placeholder="Password" />

It’s a simple component for demonstration purposes: it registers itself with the routing system for the /members route and renders some HTML resembling a login form.

We would put all components we want to use inside the members section into this Razor Class Library.

Adding a Project Reference

Now that we have a Razor Class Library project inside the solution, we need to add a project reference from the BlazorWasmLazyLoading project to the BlazorWasmLazyLoading.Members project.

You can use the user interface in your editor/IDE of choice, or you can add the following XML definition directly inside the .csproj file:

<ItemGroup>
 <ProjectReference Include="..\BlazorWasmLazyLoading.Members\BlazorWasmLazyLoading.Members.csproj" />
</ItemGroup>

Add the BlazorWebAssemblyLazyLoad Property to the Project File

We also need to set a Blazor-specific property inside the .csproj file of the Blazor WebAssembly application.

In the BlazorWasmLazyLoading.csproj file, we add the following XML definition:

<ItemGroup>
 <BlazorWebAssemblyLazyLoad Include="BlazorWasmLazyLoading.Members.wasm" />
</ItemGroup>

It specifies that the BlazorWasmLazyLoading.Members.wasm file should not be loaded on startup, even though it is referenced using a project reference.

The .wasm file extension is used for compiled WebAssembly code, and we need to make sure to append the file extension to the definition in the project file.

Implementing Lazy Loading in the Router

We now have the building blocks for lazy loading in a Blazor WebAssembly application in place: we created a Razor Class Library, referenced it in the main WebAssembly project and added the required BlazorWebAssemblyLazyLoad property to its project file.

We are now ready to implement lazy loading in the Router component. We must tell the Router where to find certain page components and which assemblies to load when the user navigates to a specific route.

First, we open the App.razor file in the Blazor WebAssembly application project, which contains the Router definition. We inject two new types into the component and add using statements for their namespaces.

@using Microsoft.AspNetCore.Components.WebAssembly.Services
@using System.Reflection

@inject LazyAssemblyLoader AssemblyLoader
@inject ILogger<Program> Logger

First, we need an instance of the LazyAssemblyLoader class, which we will use to lazy load the assemblies, as you might guess from its name.

Next, we also want to inject an instance of the ILogger type that allows us to write a log statement in case an error occurs.

<Router AppAssembly="@typeof(App).Assembly" 
 AdditionalAssemblies="_lazyLoadedAssemblies"
 OnNavigateAsync="OnNavigateAsync">
 <!-- code omitted -->
</Router>

We add two new properties to the Router definition. First, we add the AdditionalAssemblies property and assign a private field, which we will create in the code section shortly. Next, we register an OnNavigateAsync event handler through the OnNavigateAsync property.

Now, we’re ready to implement the code section, which contains the logic that performs the lazy loading of the assemblies when required.

@code {
    private List<Assembly> _lazyLoadedAssemblies = new List<Assembly>();

    private async Task OnNavigateAsync(NavigationContext context)
    {
        try
        {
            if (context.Path == "members")
            {
                var assemblies = await AssemblyLoader.LoadAssembliesAsync(new[] { "BlazorWasmLazyLoading.Members.wasm" });
                _lazyLoadedAssemblies.AddRange(assemblies);
            }
        }
        catch (Exception ex)
        {
            Logger.LogError("Error: {Message}", ex.Message);
        }
    }
}

The OnNavigateAsync method is triggered when the user navigates from one page to another.

The NavigationContext object provides us with contextual information, such as the Path property containing the route the user tries to access.

We use a try-catch statement to catch any unforeseen errors and write a log statement in case an error occurs.

We check whether the Path is equal to the members string, which means the user is trying to navigate to the /members route.

If that is the case, we use the AssemblyLoader property we injected at the top of the component and its LoadAssembliesAsync method to load the assembly. In this case, we want to load the code inside the BlazorWasmLazyLoading.Members project. Make sure to add the .wasm ending.

Last but not least, we add the loaded assemblies (you could load one or more assemblies) to the private _lazyLoadedAssemblies field, which we reference from the AdditionalAssemblies property on the Router component.

Testing Lazy Loading in a Blazor WebAssembly Application

When we run the application, the default route ("/") of the web application is loaded.

When we open the developer tools (make sure to disable the cache) and reload the page, we can see that the Members assembly hasn’t been loaded.

Google Chrome's developer tools showing the loaded website without the BlazorWasmLazyLoading.Members WebAssembly bundle.

When we navigate to the /members route, the Members assembly is downloaded to the client and the members page is loaded.

Google Chrome's developer tools showing the lazy loaded BlazorWasmLazyLoading.Members WebAssembly bundle.

How Lazy Loading Works Under the Hood

This takes us to the question of how lazy loading works in Blazor.

We use the LazyAssemblyLoader type, which is automatically registered with the dependency injection system at startup when using the WebAssemblyHostBuilder class in the Program.cs file.

The LazyAssemblyLoader type uses JavaScript Interoperability to fetch assemblies using an HTTP call from the server. It then loads the downloaded assemblies into the runtime executing on WebAssembly in the browser.

If there are any routable pages in the lazy loaded assembly, Blazor makes them available by registering the components with the Router.

Conclusion

With lazy loading, we can shrink the size of the main WebAssembly bundle downloaded to the client when the user visits the website. It can considerably reduce the WebAssembly bundle size and therefore the time until the website is loaded.

We get granular control over how we want to split the application into one or multiple lazy-loaded assemblies by splitting the components into different Razor Class Libraries.

In the App.razor file, we configure the Router to behave according to our application’s needs. We can lazy load one or multiple assemblies when the user navigates to a specific route.

Under the hood, the built-in LazyAssemblyLoader type uses JavaScript interoperability to fetch the WebAssembly bundles from the server on demand.

You can access the code used in this example on GitHub.

If you want to learn more about Blazor development, you can watch my free Blazor Crash Course on YouTube. And stay tuned to the Telerik blog for more Blazor Basics.

10 Essential ASP.NET Core Features to Remember
Assis Zang | 2025-02-18
Stuck in a development rut? These 10 ASP.NET Core features might inspire you to try a new approach!

As web developers, it is common for us to keep turning to the same old solutions for different problems, especially when we are under pressure or dealing with tight deadlines.

This happens because we often follow code patterns that we have already mastered, either because we are unaware of other alternatives or even because we are afraid of using new options and things getting out of control. However, ASP.NET Core offers features that can make code cleaner and more efficient, and they are worth learning about.

In this post, we will explore some of these features. We will cover how features such as pattern matching, local functions and extension methods, among others, can be applied more effectively in different scenarios. You can access all the code examples covered during the post in this GitHub repository: ASP.NET Core Amazing features.

1. Pattern Matching

Pattern matching first appeared in C# 7 as a way to check the structure and values of objects more concisely and expressively. It has been continually improved since then.

In pattern matching, you test an expression to determine whether it has certain characteristics. The is and switch expressions are used to implement pattern matching.

1.1. Without Pattern Matching

public string GetTransactionDetails(object transaction)
{
    if (transaction == null)
    {
        return "Invalid transaction.";
    }

    if (transaction is Payment)
    {
        Payment payment = (Payment)transaction;
        return $"Processing payment of {payment.Amount:C} to {payment.Payee}";
    }
    else if (transaction is Transfer)
    {
        Transfer transfer = (Transfer)transaction;
        return $"Transferring {transfer.Amount:C} from {transfer.FromAccount} to {transfer.ToAccount}";
    }
    else if (transaction is Refund)
    {
        Refund refund = (Refund)transaction;
        return $"Processing refund of {refund.Amount:C} to {refund.Customer}";
    }
    else
    {
        return "Unknown transaction type.";
    }
}

1.2. Using Pattern Matching

// Using pattern matching - Switch expression
 public string GetTransactionDetailsUsingPatternMatching(object transaction) => transaction switch
 {
     null => "Invalid transaction.",
     Payment { Amount: var amount, Payee: var payee } => 
         $"Processing payment of {amount:C} to {payee}",
     Transfer { Amount: var amount, FromAccount: var fromAccount, ToAccount: var toAccount } =>
         $"Transferring {amount:C} from {fromAccount} to {toAccount}",
     Refund { Amount: var amount, Customer: var customer } => 
         $"Processing refund of {amount:C} to {customer}",
     _ => "Unknown transaction type."
 };

1.3. Using Pattern Matching: is Explicit Expression

   public void ValidateObject()
    {
        object obj = "Hello, world!";

        if (obj is string s)
        {
            Console.WriteLine($"The string is: {s}");
        }
    }

Note that in example 1.1, without pattern matching, despite using the is operator to check the transaction type, there is a chain of if and else statements. This makes the code very long. Imagine if there were more transaction options—the method could become huge.

In example 1.2, we use the switch operator to check the transaction type, which makes the code much simpler and cleaner.

In example 1.3, we use the is expression explicitly to check whether obj is of type string. In addition, is string s performs a type-check and initializes the variable s as the value of obj converted to a string, if the check is true. This way, in addition to checking the type, we can convert this value to the checked type.

2. Static Methods

Static methods are methods associated with the class to which they belong, and not with specific instances of the class. In other words, unlike non-static methods, you can call them directly using the class name, without having to create a new instance of it.

The best-known examples are probably the LINQ query operators, which are implemented as static (extension) methods that add query functionality. But beyond the LINQ query methods, we can create our own static methods that, in addition to keeping the code cleaner and simpler, can be shared with other system modules. Furthermore, they are more efficient than non-static methods, since they do not require instance management.

See below the same method declared and called statically and non-statically.

2.1. Non-static Method

public class BankAccount
 {
     public double Balance { get; set; }
     public double InterestRate { get; set; }

     //Non-static method
     public BankAccount(double balance, double interestRate)
     {
         Balance = balance;
         InterestRate = interestRate;
     }

     public double CalculateInterest()
     {
         return Balance * (InterestRate / 100);
     }
 }

// Using non-static-method
 BankAccount account = new BankAccount(1000, 5);
 double interest = account.CalculateInterest();
 Console.WriteLine($"Interest earned: ${interest}");

2.2. Static Method

public static class BankAccountUtility
{
    //Static method
    public static double CalculateInterest(double balance, double interestRate)
    {
        return balance * (interestRate / 100);
    }
}

// Using static method
double interestStatic = BankAccountUtility.CalculateInterest(1000, 5);
Console.WriteLine($"Interest earned: ${interestStatic}");

Note that in example 2.1 we created the class as non-static, with properties, and the method as non-static as well. Therefore, when calling the method, it was necessary to create an instance of the class first.

In example 2.2, we created the class and method as static, which allowed the method to be called without instantiating the class. We simplified things even further by not creating properties for the class.

In this way, the static approach removes the coupling between the interest calculation and the BankAccount object. This is useful because the interest calculation in this scenario is a generic operation that does not need to be tied to an object and can be reused in different contexts, without depending on the structure or state of a class.

Another advantage of the static approach is that potential bugs are minimized, as there is no manipulation of instances of the BankAccount class—that is, there is no change in state for the objects.

3. Tuples

Tuples are a data structure that allows you to store values of different types, such as string and int, at the same time without the need to create a specific class for this.

Note the example below:

class NameParts
{
    public string FirstName { get; }
    public string LastName { get; }

    public NameParts(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}

static NameParts ExtractNameParts(string fullName)
{
    var parts = fullName.Split(' ');
    string firstName = parts[0];
    string lastName = parts.Length > 1 ? parts[1] : string.Empty;
    return new NameParts(firstName, lastName);
}

string fullName = "John Doe";
var nameParts = ExtractNameParts(fullName);
Console.WriteLine($"First Name: {nameParts.FirstName}, Last Name: {nameParts.LastName}");

Here, we declare the NameParts class that has the FirstName and LastName properties to store the value obtained in the ExtractNameParts(string fullName) method. There is also a method to display the values found.

In this case, we use a class and properties only to transport the data. But we could simplify this by using a tuple. Now see the same example using a tuple:

   //Now the method returns a tuple
   static (string FirstName, string LastName) ExtractNamePartsTuple(string fullName)
    {
        var parts = fullName.Split(' ');
        string firstName = parts[0];
        string lastName = parts.Length > 1 ? parts[1] : string.Empty;
        return (firstName, lastName);
    }

    public void PrintNameTuple()
    {
        string fullName = "John Doe";
        var nameParts = ExtractNamePartsTuple(fullName);
        Console.WriteLine($"First Name: {nameParts.FirstName}, Last Name: {nameParts.LastName}");
    }

The above example does not use a class to store the value of FirstName and LastName. Instead, a tuple is used to return the values of ExtractNamePartsTuple(string fullName) by just declaring the code (string FirstName, string LastName).

Using tuples allows the developer to keep the code straightforward, as it avoids the creation of extra classes to transport data. However, in scenarios with a certain level of complexity, it is recommended to use classes to preserve the maintainability and comprehensibility of the code. Furthermore, when using tuples, it is important to give meaningful names to the values stored in them. This way the developer makes the meaning of each element of the tuple explicit.
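Named tuple elements also work well with deconstruction, which assigns each element to a local variable in a single step. A small illustrative snippet reusing the method above:

// Deconstructing the returned tuple into named locals
var (firstName, lastName) = ExtractNamePartsTuple("John Doe");
Console.WriteLine($"First Name: {firstName}, Last Name: {lastName}");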

4. Expression-Bodied Members

Expression body definitions are a way to implement members (methods, properties, indexers or operators) through expressions instead of code blocks using {} in scenarios where the member body is simple and contains only one expression.

In Example 4.1 you can see a method using the traditional block approach, while 4.2 shows the same method using the expression-bodied members approach:

4.1. Block Approach

 public int TraditionalSum(int x, int y)
  {
      return x + y;
  }

4.2. Expression-Bodied Members Approach

   public int Sum(int x, int y) => x + y;

Note that the method that uses expression-bodied members is simpler than the traditional approach, as it does not require the creation of blocks or the return expression, which leaves the method with just a single line of code.

4.3. Properties Using Expression-Bodied Members Approach

Note in the example below that it is also possible to use the expression-bodied members approach in class constructors and properties, where the get and set accessors are implemented as expressions.

public class User
{
    private string userName;

    // Expression-bodied constructor
    public User(string name) => Name = name;

    // Expression-bodied get and set accessors
    public string Name
    {
        get => userName;
        set => userName = value;
    }
}

Just like pattern matching, using expression-bodied members allows you to write simpler code, saving lines of code, and fits well in scenarios that don’t require complexity.

5. Scoped Namespaces

File-scoped namespaces are useful for files that contain only a single namespace, which is typically the case for model classes.

Note the examples below. Example 5.1 shows the traditional namespace declaration format using curly braces, while Example 5.2 shows the same example using the file-scoped namespace format; notice the semicolon and the absence of curly braces:

5.1. Traditional Namespace

namespace AmazingFeatures.Models
{
    public class Address
    {
        public string AddressName { get; set; }
    }
}

5.2. File-scoped Namespace

namespace AmazingFeatures.Models;

public class Address
{
    public string AddressName { get; set; }
}

6. Records

Records are classes or structs that provide distinct syntax and behaviors for creating data models.

Records are useful as a replacement for classes or structs when you need to define a data model that relies on value equality—that is, when two variables of a record type are equal only if their types match and all property and field values are equal.

In addition, records can be used to define a type for which objects are immutable. An immutable type prevents you from changing any property or field value of an object after it has been instantiated.

The examples below show a common class used to create a model, followed by the same example in record format.

6.1. Model Class

   public class Product
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
        public string Category { get; set; }

        public Product(string name, decimal price, string category)
        {
            Name = name;
            Price = price;
            Category = category;
        }
    }

// Using mutable class
var product1 = new Product("Laptop", 1500.00m, "");
var product2 = new Product("", 1500.00m, "Electronics");
product1.Category = "Electronics";
product2.Name = "Laptop";

// Class object comparison (by reference)
Console.WriteLine(product1 == product2); // False (comparison by reference);
Console.WriteLine(product1.Equals(product2)); // False (no value equality logic);

6.2. Model Record

public record RecordProduct(string Name, decimal Price, string Category);

// Using immutable record
var recordProduct1 = new RecordProduct("Laptop", 1500.00m, "Electronics");
var recordProduct2 = new RecordProduct("Laptop", 1500.00m, "Electronics");

// Record comparison (by value, native)
Console.WriteLine(recordProduct1 == recordProduct2); // True (comparison by value);
Console.WriteLine(recordProduct1.Equals(recordProduct2)); // True (comparison by value);

In example 6.1, the properties of the Product class (Name, Price and Category) are freely assigned and changed, since the class is mutable by default.

Note that when the == operator is used to compare two instances of Product, the result is False, even though all the property values are identical. This is because, for classes, the == operator compares only the memory references of the objects, not the values of their properties, and the default Equals implementation behaves the same way.

Example 6.2 uses a record to represent the product. Unlike the class, the record is immutable by default—that is, the property values are defined at the time of creation and cannot be changed later. This immutability makes records an ideal choice for representing data that does not need to be modified, so they are consistent and predictable.

Another point to note is the comparison of objects. Records have comparison by value natively. This means that the == operator and the Equals method compare the values of all the object’s properties rather than their references in memory. In Example 6.2, recordProduct1 and recordProduct2 have the same values for all their properties, which causes both comparisons to return True. This value-based comparison is useful for scenarios where the object’s content is more important than its reference, such as reading and writing data.
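Because the positional properties of a record are init-only, reassigning one after creation is a compile-time error. When a modified copy is needed, records support non-destructive mutation through a with expression, as in this short illustrative snippet:

// recordProduct1.Price = 1200.00m; // compile-time error: init-only property

// Create a copy that differs only in Price
var discountedProduct = recordProduct1 with { Price = 1200.00m };
Console.WriteLine(discountedProduct == recordProduct1); // False: Price differs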

7. Delegate Func

Delegate is a type that represents a reference to a method, like a pointer. Func is a native .NET generic delegate that can be used to represent methods that return a value and can receive input parameters.

The example below demonstrates creating a delegate manually.

7.1. Manual Delegate

// Explicit definition of the delegate
public delegate int SumDelegate(int a, int b);

// Using delegate
static void UsingDelegate()
{
    // Method reference
    SumDelegate sum = SumMethod;

    Console.WriteLine(sum(3, 4)); // Displays 7
}

// Method associated with the delegate
static int SumMethod(int a, int b)
{
    return a + b;
}

Note that in example 7.1 a delegate is declared to represent a method that performs the sum of two integers.

SumDelegate is declared explicitly. It represents any method that accepts two integer parameters (int a and int b) and returns an integer value. The delegate functions as a contract, specifying the signature of methods that can be assigned to it. The SumMethod method fulfills the requirements of the signature defined by the SumDelegate delegate: two integer parameters as input and one integer as output, and performs the sum of the two numbers provided.

In the UsingDelegate function, an instance of the SumDelegate delegate is created and associated with the SumMethod method. Thus, when calling the delegate (sum(3, 4)), it redirects the call to SumMethod with the parameters provided, performing the sum and returning the result. Although it works, this approach requires the explicit definition of the delegate (public delegate int SumDelegate(int a, int b)), which makes the code longer and less efficient, especially for simple tasks like adding two numbers.

An alternative to this could be to create a delegate func. See the example below.

7.2. Delegate Func

// Using func delegate
Func<int, int, int> sum = (a, b) => a + b;

public void UsingFuncDelegate()
{
    Console.WriteLine(sum(3, 4)); // Displays 7
}

Note that we are now using the generic delegate Func to represent anonymous methods with input parameters and return values—that is, the return of the expression (a, b) => a + b;.

Func is a generic type that can represent methods with up to 16 parameters and a return type.

In the example above, the first two generic types passed to Func are two integers, a and b. The third int represents the return type of the method. The anonymous method is defined using a lambda expression (a, b) => a + b, which indicates that the method receives two integers as input and returns the sum of these two integers.

Consider using the Func delegate to create cleaner and more flexible code. Furthermore, Func allows the creation of generic and dynamic functions that can be passed as parameters, stored in variables and combined to perform complex operations, in a way that keeps the code easy to understand and read.
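For example, a Func can be received as a method parameter so the caller decides which operation to apply (an illustrative sketch, not part of the original sample project):

// A method that receives the operation to perform as a Func
public static int Apply(int a, int b, Func<int, int, int> operation) => operation(a, b);

// Usage
Console.WriteLine(Apply(3, 4, (a, b) => a + b)); // Displays 7
Console.WriteLine(Apply(3, 4, (a, b) => a * b)); // Displays 12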

8. Global Using

Global using is a feature available since C# 10. Its purpose is to reduce boilerplate and make code simpler by declaring a using directive only once and sharing it throughout the application.

Note the approach below using the traditional directive form:

8.1. Traditional Using Directive

using AmazingFeatures.Models;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;
using Microsoft.EntityFrameworkCore.Storage;

namespace AmazingFeatures.Data;
public class AmazingHelper
{
    private readonly AmazingContext _amazingContext;

    public AmazingHelper(AmazingContext amazingContext)
    {
        _amazingContext = amazingContext;
    }

    public static void MigrationInitialisation(IApplicationBuilder app)
    {
        using (var serviceScope = app.ApplicationServices.CreateScope())
        {
            var context = serviceScope.ServiceProvider.GetRequiredService<AmazingContext>();

            if (!context.Database.GetService<IRelationalDatabaseCreator>().Exists())
            {
                context.Database.Migrate();
            }
        }
    }

    public List<Product> GetProductsWithPrice() =>
        _amazingContext.Products.Where(p => p.Price > 0).ToList(); 
}

Note that in the class above we declared four usings: three from EntityFrameworkCore and one for the namespace that contains the Product model class.

Imagine that this class grows exponentially, using other classes from other namespaces. This would leave the beginning of the class with dozens of usings. Thus, with the global using feature, we can eliminate all using directives from this class.

To do this, simply create a file with a name like GlobalUsings.cs or Globals.cs and place the directives you want to share in it:

8.2. Global Using Directive

global using AmazingFeatures.Models;
global using Microsoft.EntityFrameworkCore;
global using Microsoft.EntityFrameworkCore.Infrastructure;
global using Microsoft.EntityFrameworkCore.Storage;

Note that, in this case, the file does not need to declare a class or even a namespace. It must only contain the global using directives, with the reserved word global before each directive.

Now, we can declare the AmazingHelper class without any using directive, because the compiler understands that the necessary usings have already been declared globally and are shared by the application.

namespace AmazingFeatures.Data;
public class AmazingHelper
{
    private readonly AmazingContext _amazingContext;

    public AmazingHelper(AmazingContext amazingContext)
    {
        _amazingContext = amazingContext;
    }

    public static void MigrationInitialisation(IApplicationBuilder app)
    {
        using (var serviceScope = app.ApplicationServices.CreateScope())
        {
            var context = serviceScope.ServiceProvider.GetRequiredService<AmazingContext>();

            if (!context.Database.GetService<IRelationalDatabaseCreator>().Exists())
            {
                context.Database.Migrate();
            }
        }
    }

    public List<Product> GetProductsWithPrice() =>
        _amazingContext.Products.Where(p => p.Price > 0).ToList(); 
}

Implementing global usings is helpful in scenarios where classes have many using directives, such as service classes, where resources from different sources are regularly incorporated. With global using directives, it is possible to eliminate many lines of code across different files, since the entire application shares these global resources.

9. Data Annotations

The Data Annotations resource consists of a set of attributes from the System.ComponentModel.DataAnnotations namespace. They are used to apply validations, define behaviors and specify data types on model classes and properties.

Let’s check below an example where it is possible to eliminate a validation method with just two data annotations.

9.1. Using a Validation Method

public class Product
{
    public string Name { get; set; }

    public List<string> Validate()
    {
        var errors = new List<string>();

        if (string.IsNullOrEmpty(Name))
        {
            errors.Add("The name is required.");
        }
        else if (Name.Length > 100)
        {
            errors.Add("The name must be at most 100 characters long.");
        }

        return errors;
    }
}

// Using the validation method
var product = new Product { Name = "" };
var errors = product.Validate();
if (errors.Any())
{
    foreach (var error in errors)
    {
        Console.WriteLine(error);
    }
}

Note that in the approach above it is necessary to create a method in the Product class to validate whether the Name property is null or empty and whether it has more than 100 characters.

9.2. Using Data Annotations

using System.ComponentModel.DataAnnotations;

public class ProductDataAnnotation
{
    [Required(ErrorMessage = "The name is required.")]
    [StringLength(100, ErrorMessage = "The name must be at most 100 characters long.")]
    public string Name { get; set; }
}

// Using data annotations
var productDataAnnotation = new ProductDataAnnotation { Name = "" };
var validationResults = new List<ValidationResult>();
var validationContext = new ValidationContext(productDataAnnotation);

if (!Validator.TryValidateObject(productDataAnnotation, validationContext, validationResults, true))
{
    foreach (var validationResult in validationResults)
    {
        Console.WriteLine(validationResult.ErrorMessage);
    }
}

In the approach above with data annotations, instead of writing a validation method we just add the attributes above the Name property and then call Validator.TryValidateObject once wherever validation is needed. This removes the manual if/else checks for each property in the model class: you only add the data annotations and validate the object in one place.
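When the model is used in an ASP.NET Core controller marked with [ApiController], these same annotations are evaluated automatically during model binding, and invalid requests are rejected with a 400 response, so no explicit Validator call is needed. Here is a minimal sketch; the ProductsController class and its route are illustrative and not part of the original example:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    // [ApiController] validates ProductDataAnnotation before this action runs;
    // requests that fail the data annotations never reach this method.
    [HttpPost]
    public IActionResult Create(ProductDataAnnotation product)
    {
        return Ok(product);
    }
}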

10. Generics

Generics allow you to define classes, interfaces, methods and structures that can work with generic data types. This means that the data type to be used is specified only at the time the class, method or interface is instantiated or used. In this way, they make your code more flexible, reusable and type-safe compared to non-generic alternatives such as using object, which requires manual casting.

Below are two examples. The first one is without the use of generics and the second one uses generics.

10.1. Without Generics

public int FindMaxInt(List<int> numbers)
{
    return numbers.Max();
}

public double FindMaxDouble(List<double> numbers)
{
    return numbers.Max();
}

var maxInt = FindMaxInt(new List<int> { 1, 2, 3 });
var maxDouble = FindMaxDouble(new List<double> { 1.1, 2.2, 3.3 });

10.2. Using Generics

public T FindMax<T>(List<T> items) where T : IComparable<T>
{
    return items.Max();
}

public void UsingGenerics()
{
    var maxInt = FindMax(new List<int> { 1, 2, 3 });
    var maxDouble = FindMax(new List<double> { 1.1, 2.2, 3.3 });
}

Note that in the first approach without generics, the code implements two methods: one to get the maximum value from a list of integers, and another to get the maximum value from a list of double types.

In the second example, a single method is implemented that expects a list of T—that is, a generic type instead of a specific type such as int or double as in the previous example.
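The same idea extends beyond methods to classes and interfaces. Below is a minimal sketch of a generic class; the InMemoryRepository type is illustrative and not part of the original example:

public class InMemoryRepository<T> where T : class
{
    // Items are stored as the concrete type T, so no casting is needed when reading them back.
    private readonly List<T> _items = new();

    public void Add(T item) => _items.Add(item);

    public IReadOnlyList<T> GetAll() => _items;
}

// Usage: the same class works, fully typed, for any reference type.
// var repository = new InMemoryRepository<Product>();
// repository.Add(new Product { Name = "Keyboard" });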

Conclusion

Despite the safety traditional approaches bring when implementing new code, it is important for web developers to consider alternatives that may be more efficient, depending on the scenario.

In this post, we covered 10 ASP.NET Core features that fit well in different situations. So, whenever the opportunity arises, consider using some of these features to further improve your code.

]]>
urn:uuid:7f8673c2-aea6-45a5-a121-217816cea5a5 Angular Basics: DevTools The Angular DevTools browser extension enhances debugging and profiling specifically in Angular applications. See what features to get started with. 2025-02-17T11:27:01Z 2025-02-26T04:51:04Z Hassan Djirdeh The Angular DevTools browser extension enhances debugging and profiling specifically in Angular applications. See what features to get started with.

Angular, a framework developed and maintained by Google, is one of today’s most popular tools for building web applications. Its powerful features, such as two-way data binding, dependency injection and modular architecture, make it an excellent choice for crafting dynamic web applications.

In this article, we’ll explore Angular DevTools, a dedicated browser extension designed to enhance debugging and profiling capabilities for Angular applications.

DevTools

In modern web development, DevTools (short for Developer Tools) are essential utilities that allow developers to inspect, debug and optimize their web applications directly from the browser. Most modern browsers, including Chrome and Firefox, have built-in developer tools for debugging JavaScript, analyzing network activity, inspecting the DOM and monitoring performance metrics. These tools help developers understand how applications behave under the hood.

chrome-devtools

Specialized DevTools extensions, such as React DevTools or Vue DevTools, take this further by offering framework-specific insights. They allow developers to navigate the structure of their applications, analyze state changes and identify performance bottlenecks.

Angular DevTools is a framework-specific extension that provides an Angular-centric debugging and profiling experience tailored to the unique architecture of Angular applications.

Angular DevTools

Angular DevTools is a browser extension providing an Angular-specific debugging and profiling interface within our developer tools. It integrates seamlessly with Chrome and Firefox, offering a tailored experience for inspecting and optimizing Angular applications. With Angular DevTools, developers can dive deep into the component structure, debug change detection mechanisms and analyze application performance to identify bottlenecks.

To get started with Angular DevTools, install it from the Chrome Web Store or Firefox Add-ons. Once installed, we can open the browser’s developer tools by pressing F12 (Windows/Linux) or Cmd+Option+I (Mac). If the application we’re inspecting is built with Angular, we’ll find an Angular-specific tab in the developer tools interface.

Angular tab in devtools

Let’s take a deeper look into some key functionalities of Angular DevTools.

Components

The Components tab is one of the more powerful features of Angular DevTools. When we open an Angular application in the browser, this tab provides a detailed view of the app’s structure. It displays a tree of components and directives that represent the hierarchy of the application.

We can click on any component in this tree to inspect its properties and metadata on the right-hand side. For instance, we can examine @Input and @Output properties, making it easy to understand the data flow in our application. Angular DevTools also allows us to search for specific components by their names using the search box above the component tree.

Search bar has suggestions Learn Angular and Build an Angular application

If we need to dive deeper into the implementation, we can navigate to the host DOM node of a selected component or directive by double-clicking it. This will redirect us to the Elements tab of our browser’s developer tools. Similarly, we can view the source code of a component by clicking the icon in the top-right corner of the properties panel.

Debugging Properties and Interactions

Angular DevTools doesn’t just let us observe our application; it lets us interact with it as well. In the Components tab, we can edit property values directly. For example, if a component has an input property controlling its behavior, we can update this value in real time to see how it affects the application.

Todo has task Buy milk

Additionally, Angular DevTools integrates with the browser’s console for direct interaction with selected components or directives. By typing $ng0, we can access the most recently selected component instance. Previous instances can be accessed using $ng1, $ng2 and so on. This feature is especially useful for testing methods and properties during runtime.

Console tab open

Profiler

The Profiler tab is essential for understanding how our Angular application performs during change detection cycles. This tab allows us to record and visualize our app’s performance, identifying areas where improvements can be made.

When we start recording, the Profiler tracks change detection events and displays them as a bar chart. Each bar represents a single change detection cycle, with its height indicating the time spent. Clicking on a bar provides further insights, including which components or directives took the most time to render.

Angular DevTools Profiler

For a more detailed view, Angular DevTools offers a flame graph representation. The flame graph shows the hierarchy of components and how much time each consumed during a change detection cycle. Components that took longer to render appear with more intense colors, helping us quickly identify potential bottlenecks.

Angular Profiler flame graph shows mostly yellow with a spot of red on a nested component

Injector Tree

The Injector Tree feature visualizes our dependency injection hierarchy for Angular applications built with version 17 or higher. This feature is invaluable when debugging services, providers or dependency resolution paths.

The Injector Tree displays two views: the environment hierarchy and the element hierarchy, each representing how dependencies are resolved in an application. By selecting an injector, we can see all the providers it contains and the resolution path Angular follows to fulfill dependencies. This makes identifying and fixing misconfigurations in our dependency injection setup easier.

Injector Tree displaying the environment hierarchy and the element hierarchy

Wrap-up

Angular DevTools is a practical extension for developers working with Angular applications. Its features, such as the Components tab, Profiler tab and Injector Tree, provide valuable insights into application structure, performance and dependency resolution. These tools make it easier to debug, optimize and better understand the behavior of Angular applications, enabling developers to address issues efficiently and refine their codebase effectively.

In this article, we’ve highlighted some of the core functionalities of Angular DevTools, but there’s more to explore. For additional details and a complete overview of what Angular DevTools offers, visit the official Angular DevTools documentation, which also served as the reference for the images used in this article.

]]>
urn:uuid:3cb3665e-8fd4-4ef7-ba57-1123ac930a1e Coding Azure 3: Creating an App Service for a Web Service Your cloud-native app is going to need a Web Service. Here’s how to configure an App Service to hold that Web Service (and how to deploy it to the App Service from Visual Studio). 2025-02-13T11:11:11Z 2025-02-26T04:51:04Z Peter Vogel Your cloud-native app is going to need a Web Service. Here’s how to configure an App Service to hold that Web Service (and how to deploy it to the App Service from Visual Studio).

In my previous post, Coding Azure 2: Configuring an Azure SQL Database, I set up a cloud-native Azure SQL database and loaded it with some data. The next step in creating a three-tier microservice is to add a Web Service that will access that database. For a cloud-native solution, that Web Service will run inside an Azure App Service.

I should mention that the Azure team seems to use the two terms “App Service” and “Web App” interchangeably (or I can’t figure out the distinction). I’ll be using “App Service” except where the UI forces me to use “Web App.”

Creating the App Service

To start creating an App Service that will host a Web Service in Azure, surf to the Azure portal, type App Services in the search box in the middle of the top of the page, and click on App Services in the dropdown list to get to the App Services page. Then click on the + Create at the left end of the menu at the top of the page and select Web App to start the Create Web App wizard.

As usual, you’ll need to assign your App Service to a resource group and give it a name (the App Service’s name gets rolled into a URL, so you can’t use spaces but, for an App Service, the UI will let you use uppercase letters—not that they make any difference). By default, the wizard will automatically tack a random string onto the end of the name so that the resulting URL is unique, but you can turn that off using the toggle switch right below your App Service’s name (I called my service warehouseappdb, for example, which turned out to be unique, so I turned off the random string option).

The next group of settings will depend on what development platform you’re using and how you intend to deploy your application.

The first option lets you pick between Code, which loads the support stack for a development platform directly into the App Service, and Container, which lets you, eventually, load a container holding the code for your Web Service and its support stack.

I don’t intend to load a container into this App Service, so I set the Code/Container radio button to Code. Picking the Code option displays the Runtime stack dropdown that lets you pick the support packages that will be loaded into the App Service. I picked .NET 8 because it matches the stack I’ll be using to create my Web Service.

If you pick the Container option, you won’t have to select a runtime stack. That will increase your costs (see below) and you’ll have to give up using remote debugging (which I’ll cover in an upcoming post). You’ll also have to configure your Visual Studio/Visual Studio Code project to generate your application in a container.

Regardless of which choice you made in the Code/Container option, you’ll need to pick your App Service’s operating system. Because I used the Code option, I picked Windows purely for practical purposes: While .NET runs on both Windows and Linux, I can’t enable remote debugging on an App Service that uses Linux unless I’m using .NET 6 or earlier. It also reflects the platform that I’m developing my Web Service on.

If you pick the Container option, then your operating system choice should be driven by the platform you’re developing on, but Visual Studio, at least, will let you develop on Windows and deploy to a Linux host.

I set the Region to the same region as the Azure SQL database I created in Part 2 to avoid any cross-region charges (Canada Central, in my case).

Following those selections are the ones that let you set how much processing power you’ll get (and, as a result, how much you’ll pay). Under Pricing Plan I clicked on Create New and, in the resulting dialog, gave my plan a name.

Clicking Create New not only lets you set a name but also displays a dropdown list of available plans. For this sample application (and using the Code option), I selected the Free F1 plan which gives me 60 processing minutes a day to play with. In real life, you’ll want to pick something with more power. If you picked the Container option, the Free option isn’t available and you’ll have to pick from more powerful (and expensive) plans.

Picking the Free plan means that I can’t select Zone Redundancy (which would create failover copies of my App Service in another datacenter in the same region) or additional services that would automatically be deployed with my App Service. Those extra services include an Azure SQL database and a Redis cache (I’ll look at using a Redis cache later in this series). Picking the Container option, on the other hand, lets you select Zone redundancy, so you can fall back to the copy in another data center if your data center fails.

On the other tabs:

  • Container tab:

    • This will only appear if you selected the Container option earlier on. If you want to use an image from your own registry, select the appropriate source from the Image source (Azure Container Registry, Docker Hub or a private registry) and then pick your image from the registry.
  • Deployment tab:

    • This will only appear if you picked the Code option. The free plan that I picked doesn’t support enabling Continuous deployment or integration with GitHub. If you’re keeping your source code in GitHub, you can enter your repo’s information here and review the workflow that will deploy your branch (I’m going to deploy my sample app straight from Visual Studio so I didn’t mind losing these options).

    • I left Basic authentication disabled just to simplify deploying from Visual Studio.

  • Networking tab:

    • I left Enable public access set to On so that I can test my service just by surfing to it. You should too, to facilitate creating your app. In real life you’ll eventually want to disable this in your production environment so that the service can only be accessed from the application’s frontend.

    • The free service plan doesn’t let me Enable network injection, which would let me attach my service to a Virtual Network and take advantage of the security features of a VNet. I didn’t miss that because, in focusing on a cloud-native solution, I wanted to secure my app without using features that smacked of a physical data center.

  • Monitoring + Secure Tab:

    • I disabled Application Insights because I didn’t want to incur any associated charges. In real life, you’ll probably (eventually) want to enable Application Insights in your development, test and production App Services, but only when you have a problem to track down.

    • I didn’t enable Microsoft Defender for the same reason that I didn’t enable Application Insights, but you should almost certainly enable it for your production App Service.

Deploying to Your App Service from Visual Studio

To deploy your Web Service from Visual Studio, you need to create a Publish profile (in a later post, I’ll cover how to deploy from Visual Studio Code). To start creating a Publish profile:

  1. First, start Visual Studio and create a Visual Studio project (I used the ASP.NET Core Web API template, called my project WarehouseMgmtService, and—other than setting my Framework to .NET 8.0—took all the defaults on the Additional information page).

  2. From Visual Studio’s Build menu, select Publish <project name> to display the Publish tab.

  3. If this is your first profile, then, from that dialog’s Target list, click on the Azure choice to select it. (If this isn’t your project’s first profile, then, from the menu across the top of the panel on the right, click on the + Create tab and then select Azure from the Target list.)

  4. Click the Next button at the bottom of the dialog to display the Specific Target tab.

  5. On the Specific Target tab, you need to select a target that matches the App Service you created. In my case, because I selected the Code option when configuring my App Service, I selected the Azure App Service (Windows) target. (If you chose to use a container with your App Service, then you’ll want to select App Service Container for the platform—Windows or Linux—that you selected.)

  6. Click the Next button to move to the App Service page.

  7. On the App Service tab, you’ll need to drill down to which Azure App Service you want to deploy your Web Service to (the list is organized either by resource group or Web Apps and limited to services that match the template you selected on the previous tab). In my case, the App Service I want is my WarehouseMgmtService in my WarehouseMgmt resource group.

  8. Click the Next button at the bottom of the dialog to go to the API Management page.

  9. Right now, I’m not going to use API Management, so I checked the Skip this step checkbox at the bottom of the dialog.

  10. Click the Finish button to create your Publish profile and the Close button to return to Visual Studio and display your profile.

If you want to start using API Management, then, before clicking the Finish button, you can either select an existing Web API resource from the list or click the + Create new button to create a Web API. (If you picked the Container option, you may get a message about increasing your admin privileges on your registry.)

Before you use the profile though, let’s recognize that you’ll be needing to debug your Web Service (nothing ever works the first time). To support that, modify your Publish profile to deploy a debug build: In your publish profile, click on the pencil icon beside the Configuration’s Release setting and, in the following dialog, set the Configuration dropdown list to Debug before clicking the Save button.

In the long run, you’ll probably want two publish profiles: One that deploys to a “development” App Service with a debug build and a second profile that deploys to a staging/testing App Service (or a deployment slot in a product App Service) with a release build.

Deploy Your Web Service

Now that you’ve created a Publish profile, you can use it: Back in Visual Studio, with your profile displayed, click that profile’s Publish button to deploy the skeleton of your Web Service to your App Service.

Your app will be deployed to your service and, if your App Service is running, Visual Studio will open a browser window and request the service at your App Service’s URL. That will generate an error page with an HTTP 404 error … which is sort of disappointing.

That initial error 404 occurs because the publish process requests the Web Service using the App Service’s base URL (https://<app service>.azurewebsites.net/) and the template’s base service is (currently) at https://<app service>.azurewebsites.net/weatherforecast. Change the URL to the default service specified in your Program.cs file and you should get back your default output.
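For reference, the endpoint the template registers looks roughly like the sketch below. This assumes the minimal-API flavor of the ASP.NET Core Web API template; the controller-based flavor exposes the same /weatherforecast route through a WeatherForecastController instead:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The template's sample endpoint is mapped here, which is why the service
// answers at /weatherforecast rather than at the App Service's base URL.
app.MapGet("/weatherforecast", () =>
    Enumerable.Range(1, 5).Select(index => new
    {
        Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
        TemperatureC = Random.Shared.Next(-20, 55)
    }));

app.Run();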

Resetting an App Service

In general, you should find that, as you make modifications to your application, you can repeatedly deploy it into its App Service without a problem. However, it’s possible to make sufficient changes to render your App Service unusable (you’ll get the message “HTTP Error 500.30 - ASP.NET Core app failed to start”). It’s not easy to do but, for example, in a .NET App Service, replacing an ASP.NET Core app configured for Razor Pages with one configured for MVC/controllers-with-views will sometimes do it.

For an App Service running a .NET app, you can fix the problem by deleting all the files in the App Service:

  1. In your App Service’s menu on the left, expand the Development Tools node.

  2. Select the Console node to open a console window on the right, with the prompt set to C:\home\site\wwwroot.

  3. Delete all the files in the folder with:

del *.* /s /q
  4. Republish your app to the App Service.

Next Steps

In a later post in this series, I’ve got the steps for deploying to an App Service from Visual Studio Code (feel free to skip ahead once that’s published). In my next post, however, I’m going to cover how to create a Web Service that will be able to talk to my Azure SQL database. Keep coming back, as they say.

]]>
urn:uuid:da410a2e-0a9d-4f37-b78a-1caad7f8d1e5 The Telerik and Kendo UI 2025 Q1 Release Is Here—See What’s New! See what’s new in 2025 Q1 release for Progress Telerik and Kendo UI libraries—expanded design system tooling, modernization options, productivity boosts and more. 2025-02-12T14:53:02Z 2025-02-26T04:51:04Z Iva Borisova See what’s new in 2025 Q1 release for Progress Telerik and Kendo UI libraries—expanded design system tooling, modernization options, productivity boosts and more.

The first Progress Telerik and Kendo UI release for 2025 brings new time-saving features and components and two important technical announcements: a new licensing process and the end of life for Telerik UI for Xamarin.

This article contains three important sections: Release Highlights, Licensing Updates and Telerik UI for Xamarin End of Life.

Release Highlights

The 2025 Q1 release is here, bringing powerful updates that continue to enhance the design-to-development workflows, modernize legacy projects, accelerate app development and deliver rich, data-driven experiences. With new Building Blocks, AI Usage Monitoring Dashboard template, UI components like DockManager, Chart Wizard and OTP (one-time password) Input, accompanied by feature-rich existing components, developers like you will find the solutions to all emerging problems. Get ready to build faster, smarter and more adaptable applications!

Expanded Design System Tooling

many screens with varying designs

  • More Building Blocks: 12 new Building Blocks such as Dashboard cards, AI App Welcome screen, AI-powered text editor and conversation source are added to the already robust collection, further facilitating your development experience.
  • AI Usage Monitoring dashboard template: A new dashboard template enriches the Progress Page Templates collection. It is designed as a centralized interface for providing real-time insights into how AI is being used.
  • Progress ThemeBuilder enhancements: Accessibility and performance improvements are made to the powerful ThemeBuilder app, for a more seamless experience while designing your apps.
  • Support for CSS variables in Charts across Telerik and Kendo UI: Customize Chart styles without modifying the core code. Effortlessly adjust colors, fonts and sizes across multiple charts with a single variable change.

Modernization of Legacy Projects

list of members and individual biographies

  • Desktop-to-web migration: Recreate desktop-like experiences in your Blazor applications with Telerik UI for Blazor DockManager. It replicates docks, along with their behaviors, for smooth desktop-to-web migration.
  • Xamarin to .NET MAUI migration: Effective February 19, 2025, Progress discontinues Telerik UI for Xamarin. Since Microsoft’s retirement of Xamarin in May 2024, we’ve strived for full featured parity in Telerik UI for .NET MAUI, while adding new controls, features and enhancements with every release, like the new Light and Dark Platform themes in 2025 Q1.

Data-Driven Experiences with Powerful Visualizations

New Chart Wizard

  • New Chart Wizard component: Create a chart using data from a grid, another data-bound component or an external source with the Chart Wizard control, now available in Telerik UI for ASP.NET Core/MVC and Kendo UI for jQuery.
  • Enhanced Telerik Reporting capabilities: 2025 Q1 release brings multiple enhancements for an advanced reporting experience like GraphQL data native support, raw data export in desktop designers, a modernized report engine cache for .NET, performance improvements in the data engine and more.

Adaptive & Responsive, Enterprise-Ready Components

registration form with one-time password

  • New OTP (one-time password) UI control: Build more secure apps with the new OTP component in Telerik UI for ASP.NET Core/MVC, Kendo UI for Angular and UI for jQuery. One-time passwords play a vital role in helping prevent unauthorized access and better protecting user data.
  • Telerik UI for Blazor FloatingAction Button: We’ve enriched the Blazor UI library with a FloatingActionButton control that accompanies the new DockManager. Add instant and robust interactivity to your Blazor app by showing options on a sleek dial when a button is clicked.
  • All screen sizes covered: Deliver a flawless user experience with adaptive and responsive UI in Toolbar, ColorPicker and TabStrip UI components across Telerik and Kendo UI. To future-proof your applications for evolving screen sizes, we plan to expand adaptiveness and responsiveness to all UI components throughout 2025.

More GenAI Integration Assets to Innovate and Differentiate

GenAI widget at work

  • AI Prompt component integration with Microsoft.Extensions.AI preview package: Build AI-powered features in your .NET applications more efficiently with out-of-the-box abstractions for integrating popular AI services into your web apps.
  • New AI Prompt control in Telerik UI for AJAX: Integrate GenAI capabilities into your WebForms apps. The component enables sending of prompts and then mapping the response.

Productivity Boost with Robust Features

How does Telerik DevCraft cut development time?

  • Multiple DataGrid improvements across the board: Export to PDF, row spanning, resizable Grid, drag handle and hint customization, and more are added to the Telerik and Kendo UI Grids.
  • PDFViewer annotations: Customization in Telerik UI for ASP.NET Core/MVC and Kendo UI for jQuery PDFViewer is even easier with new annotations (text highlighting and free text annotations).
  • More drag-and-drop features in Angular Gantt: Drag-to-edit task duration and moving tasks with a simple drag-and-drop are now supported by the Kendo UI for Angular Gantt Chart.
  • Document Processing Libraries (DPL) enhancements: Optical Character Recognition (OCR) is added to the Telerik Document Processing Libraries, allowing the conversion of scanned images and PDFs into machine-readable and editable texts. The barcode generation API allows adding 1D and 2D barcodes to your documents.
  • Updated debugging technology: Keyboard shortcuts to speed up task execution, Auth Inspector Tab for Requests, Redesigned Rules Logic and UI, including indicators for rules that won’t execute as expected and suggestions on how to fix them, drag-and-drop support for Rule Actions, redesigned sized view and many more features are now available to help you boost your debugging game with Telerik Fiddler Everywhere.

Licensing Updates

A few years ago, we introduced a licensing mechanism for Kendo UI, and with Q1 2025, we are expanding the same straightforward and flexible licensing mechanism to the rest of the products in the portfolio. Our goal is to provide a unified approach to licensing across our collection of libraries, allowing teams to focus on innovation while maintaining compliance with ease. We believe that this change better serves the evolving needs of developers and organizations and will enable us to further improve the license management experience we provide.

Don’t worry, the process is simple. You just need a License Key File. Learn more about the License Key Files in the dedicated blog post.

Telerik UI for Xamarin End of Life

Microsoft ended support for Xamarin in May 2024, and we continued maintaining our component library until now to give developers time to transition. Effective February 19, 2025, Progress will follow suit and discontinue Telerik UI for Xamarin. It will no longer be available for sale, nor will it be part of DevCraft. To further assist with the transition, we will provide limited support and critical updates until February 2026.

Upgrade Today

To benefit from 2025 Q1 new additions, simply download the latest packages. Existing Progress Telerik or Kendo UI customers can do this right from Your Account page, or by updating your NuGet package reference to the latest version in your solutions. For our Angular, React and Vue libraries, you just need to install the latest npm packages.

]]>
urn:uuid:5a5d15d1-bd92-46a8-8d1b-eef9bbbce304 License Key Files in Telerik and Kendo UI Products 2025 Update Starting with Q1 2025, all users of Telerik and Kendo UI need a valid license key file in new and existing projects. Learn more. 2025-02-12T14:52:41Z 2025-02-26T04:51:04Z Maria Ivanova Starting with Q1 2025, all users of Telerik and Kendo UI need a valid license key file in new and existing projects. Learn more.

A few years ago, we introduced a licensing mechanism for our JavaScript products. With Q1 2025, we are expanding the same straightforward and flexible licensing mechanism to the rest of the products in the Progress Telerik and Kendo UI portfolio.

Our mission has always been to drive the utmost productivity for developers, enabling them to build high-quality, modern and engaging applications without unnecessary friction. At the same time, organizations need efficient ways to manage their software licensing and compliance.

Until now, licensing key files (LKF) were required only in Kendo UI products and were available only as manual download and installation. With today’s Q1 2025 release, we are introducing an improved streamlined licensing mechanism that supports both trial and license holders for all Telerik and Kendo UI libraries and tools.

Our goal is to provide a unified approach to licensing across the portfolio, allowing teams to focus on innovation while maintaining compliance with ease. We believe that this change better serves the evolving needs of developers and organizations and will enable us to further improve the license management experience we provide. Stay tuned!

What Is Changing?

Starting with Q1 2025, all users of the Telerik and Kendo UI components and tools will need to apply a valid license key file to both new and existing projects. The update of the licensing mechanism helps you modernize the way you integrate Telerik and Kendo UI products in your projects.

How to Add a License Key

Each of the products in the portfolio has a dedicated documentation article describing in detail how to license it. These links are listed in the table below. Here are the two main approaches.

For a manual installation or upgrade, the new licensing model follows these simple steps:

  1. Obtain a license key – After downloading a trial, purchasing or renewing your license for Telerik and Kendo UI products, you will be able to obtain a license key file from Your Account dashboard.
  2. Add the key to your project – Place the license key file in your project directory.
  3. Automatic validation – The Telerik products will automatically detect the key file and validate it, enabling full functionality without requiring manual activation. For Kendo UI products, you need to perform an npx activation command as described in the documentation.

To simplify the process, we have also automated this step in our web installers, Control Panel tool, Visual Studio and Visual Studio Code extensions. If you are installing or upgrading through these tools, they will automatically download your personal developer license and store the license key in your home directory, making it available for all projects that you develop on the machine. This way, you won’t need to bother about the licensing part as it will be automatically resolved.

Which Products Need a License Key File?

The current Q1 2025 release introduces the license key file mechanism for the following products:

For all Kendo UI products, this licensing mechanism has been in place since 2023, and now it will only require an update of the license key file when adopting new release versions. This includes:

When Does the Change Take Effect?

The licensing update is effective starting on February 12, 2025 along with the Telerik and Kendo UI Q1 2025 release.

Support and Resources

We understand that licensing changes may raise questions, and we are here to support you every step of the way. For additional information on the topic, we have a dedicated licensing FAQ page and, as always, the team is ready to assist through our technical support system.

]]>
urn:uuid:59262eb5-5772-4382-9ab0-552647418d74 Integrating GraphQL into Blazor Learn how to integrate a GraphQL service into a Blazor application with different service queries and Telerik UI for Blazor controls. 2025-02-11T16:09:47Z 2025-02-26T04:51:04Z Héctor Pérez Learn how to integrate a GraphQL service into a Blazor application with different service queries and Telerik UI for Blazor controls.

In this post, you will learn how to integrate a GraphQL service into your Blazor-based applications by creating a sample CRUD application and using Progress Telerik UI for Blazor components. This will give you a better perspective for handling similar integrations in your own projects. Let’s begin!

Important: All operations performed on the GraphQL service in this exercise are not persistent, so you will receive responses representing their correct execution, but you won’t see the change reflected in the final database.

Creating the Sample Project

To get practical experience with this topic, let’s start by creating a sample project. In the template selector, you need to configure a new solution using the Blazor Web App template. Select the Interactive render mode as Server, the Interactivity location as Global and the Framework as .NET 9.0, as follows:

Setting Up a Blazor Project in Visual Studio 2022

Next, you need to configure the project to use Telerik UI for Blazor components by following the guide in the official documentation, which will allow us to create quick and aesthetic interfaces.

Easily Generating Data Models

For our sample project, we’ll use the GraphQLZero service, which allows free practice with their API.

Although we could start the project by writing the queries manually as text strings (a quick sketch of that approach appears right after the list below), this can be counterproductive in the long run for several reasons:

  • The schema definition may change.
  • Queries need to be manually searched and updated in strings.
  • It’s not typed, which can cause errors.
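For comparison, this is roughly what the hand-written-string approach looks like with GraphQL.Client. It’s a minimal sketch that assumes an injected GraphQLHttpClient named client and the GraphQlResponse type defined later in this article:

// The query lives in a plain string: nothing is type-checked, and the code
// silently breaks if the schema changes.
var request = new GraphQLRequest
{
    Query = @"query { posts { data { id title body } } }"
};

var response = await client.SendQueryAsync<GraphQlResponse>(request);
var posts = response.Data.Posts.Data;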

To solve these issues, we’ll use the GraphQlClientGenerator project. This project is regularly maintained, and they’ve added support for integration with C# 9, allowing data models to be generated in a super simple way.

The process consists of the following steps:

  1. Install the GraphQlClientGenerator NuGet package.
  2. Open the .csproj file. (You can do this by double-clicking on the project.)
  3. Add a new PropertyGroup section with the following information:
<PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net9.0</TargetFramework>

    <!-- GraphQL generator properties -->
    <GraphQlClientGenerator_ServiceUrl>https://graphqlzero.almansi.me/api</GraphQlClientGenerator_ServiceUrl>
    <!-- GraphQlClientGenerator_Namespace is optional; if omitted the first compilation unit namespace will be used -->
    <GraphQlClientGenerator_Namespace>$(RootNamespace)</GraphQlClientGenerator_Namespace>
    <!-- GraphQlClientGenerator_CustomClassMapping is optional; it is omitted here so the generator produces the default QueryQueryBuilder and MutationQueryBuilder names used later in this article -->
    <GraphQlClientGenerator_IdTypeMapping>String</GraphQlClientGenerator_IdTypeMapping>
    <!-- other GraphQL generator property values -->
</PropertyGroup>

In the code above, you can see that I’ve added the URL pointing to the GraphQL service; you can replace this endpoint with another one to generate models from that endpoint. I’ve also added GraphQlClientGenerator_IdTypeMapping, which controls the C# type that GraphQL ID fields map to in the generated code. By default they are generated as Guid, but this service uses simple numeric identifiers, so mapping them to String works better here.

  4. Modify the existing ItemGroup with the following configuration:
<ItemGroup>
    <PackageReference Include="GraphQL.Client" Version="6.1.0" />
    <PackageReference Include="GraphQL.Client.Serializer.Newtonsoft" Version="6.1.0" />
    <PackageReference Include="GraphQlClientGenerator" Version="0.9.24" IncludeAssets="analyzers" />
    <PackageReference Include="Telerik.UI.for.Blazor" Version="6.2.0" />
    <CompilerVisibleProperty Include="GraphQlClientGenerator_ServiceUrl" />
    <CompilerVisibleProperty Include="GraphQlClientGenerator_Namespace" />
    <CompilerVisibleProperty Include="GraphQlClientGenerator_IdTypeMapping" />
</ItemGroup>

The code above will install the necessary libraries to work with GraphQL.Client, which will allow us to easily invoke GraphQL services.

Similarly, you can see that I’ve added a couple of CompilerVisibleProperty directives that make MSBuild properties visible to the analyzer during compilation. And, finally, in the GraphQlClientGenerator package reference, I added the IncludeAssets attribute to include the generated analyzers. All this allows for successful compilation and is necessary for entities to be automatically generated.

Once compilation has been performed, if we go to Dependencies | Analyzers | GraphQlClientGenerator | GraphQlClientGenerator.GraphQlClientSourceGenerator | GraphQlClient.cs, we’ll see that within this class, methods and models have been created that will allow us to work with the GraphQL schema in a typed way from C#.

Creating a CRUD Application to Manage Posts

There are multiple ways to work with a GraphQL service, whether using HttpClient, Strawberry Shake or other libraries. In our case, we’ll use the GraphQL.Client NuGet package, which is one of the most widely used and was installed when you added the package references within ItemGroup. Before creating the first page of the system, we must go to Program.cs and register a singleton GraphQLHttpClient (configured through GraphQLHttpClientOptions) as follows:

var builder = WebApplication.CreateBuilder(args);
...
builder.Services.AddSingleton(provider =>
{
    var options = new GraphQLHttpClientOptions
    {
        EndPoint = new Uri("https://graphqlzero.almansi.me/api")
    };

    return new GraphQLHttpClient(options, new NewtonsoftJsonSerializer());
});

var app = builder.Build();

This will allow us to reuse this instance as specified in the library’s documentation.

Creating a Page to Display Service Data

Let’s start by creating the page to display the existing posts in the database. To do this, we’ll create a new page component called PostList inside the Components folder. Within this page, we’ll take advantage of the power of Telerik controls, defining the graphical interface as follows:

@page "/posts"
@using BlazorGraphQLDemo.Models
@using GraphQL
@using GraphQL.Client.Http
@using GraphQL.Client;
@using GraphQL.Client.Serializer.Newtonsoft
@using Telerik.Blazor.Components
@rendermode InteractiveServer
@inject GraphQLHttpClient HttpClient
@inject NavigationManager Navigation
@inject IJSRuntime JSRuntime

<h3>Post List</h3>

<TelerikGrid Data="@posts" Pageable="true" PageSize="10" Sortable="true" Groupable="true">
    <GridColumns>
        <GridColumn Field="Id" Title="ID" Width="50px" />
        <GridColumn Field="Title" Title="Title" />
        <GridColumn Field="Body" Title="Body" />
    </GridColumns>
</TelerikGrid>

In the code above, we’ve used a TelerikGrid control to quickly display data in grid form. The Data property expects a variable called posts, which we define in the code section as follows:

@code {
    private List<Post> posts = new List<Post>();        

    protected override async Task OnInitializedAsync()
    {
        await LoadPosts();
    }

    private async Task LoadPosts()
    {
        var builder =
           new QueryQueryBuilder()
             .WithPosts(
               new PostsPageQueryBuilder()
                 .WithData(
                   new PostQueryBuilder()
                     .WithAllScalarFields()));        

        //Generated query:
        // query {
        //     posts {
        //         data {
        //             id
        //             title
        //           body
        //         }
        //     }
        // }

        var query = new GraphQLRequest
            {
                Query = builder.Build()
            };
        
        var response = await HttpClient.SendQueryAsync<GraphQlResponse>(query);
        posts = response.Data.Posts.Data;
    }
}

Let’s describe a bit what happened in the code above. We start by defining the posts variable in which the posts obtained from the service will be stored.

Next, in the OnInitializedAsync method, the LoadPosts method is invoked, within which the typed query that will generate the final query to be executed against the GraphQL service is defined. I’ve placed a comment so you can see the query obtained from that execution.

Then an instance of type GraphQLRequest is created that allows sending requests to the GraphQL service.

And, finally, the request is executed through the SendQueryAsync method, passing the generated query as a parameter, deserializing the response to the GraphQlResponse type and assigning the result to posts.

The next step is to create a new folder called Models, within which we’ll create a class called GraphQlResponse as follows:

public class GraphQlResponse
{
    public PostsData Posts { get; set; }
}
public class PostsData
{
    public List<Post> Data { get; set; }
}

Now let’s go to the file located in Components | Layout | NavMenu.razor, which we’ll edit by adding a new element to the menu:

<div class="nav-item px-3">
    <NavLink class="nav-link" href="posts">
        <span class="bi bi-list-nested-nav-menu" aria-hidden="true"></span> Posts
    </NavLink>
</div>

Once we’ve applied the code above, you’ll see the following screen when running the application and navigating to the Posts option in the menu:

The Telerik DataGrid control displaying posts retrieved from the GraphQL service

It’s amazing how in just a few lines of code we’re displaying information obtained from the service in a clear and presentable way.

Creating New Records

Now that we’ve retrieved all the records and displayed them in a Blazor DataGrid, let’s see how to create a new record. For this, we’ll use the TelerikButton and TelerikWindow controls so we can add a new record without leaving the page. We’ll achieve this by adding the following lines to the PostList.razor page:

<TelerikGrid ...>

<TelerikButton OnClick="@ShowCreatePostDialog">Create New Post</TelerikButton>

<TelerikWindow @bind-Visible="@isCreatePostDialogVisible">
    <WindowContent>
        <EditForm Model="@newPost" OnValidSubmit="@CreatePost">
            <DataAnnotationsValidator />
            <ValidationSummary />
            <div>
                <label>Title:</label>
                <InputText @bind-Value="newPost.Title" />
            </div>
            <div>
                <label>Body:</label>
                <InputText @bind-Value="newPost.Body" />
            </div>
            <TelerikButton ButtonType="@ButtonType.Submit">Save</TelerikButton>
            <TelerikButton OnClick="@CloseCreatePostDialog">Cancel</TelerikButton>
        </EditForm>
    </WindowContent>
</TelerikWindow>

On the other hand, let’s add three methods in the code section to handle window visibility, as well as carry out the operation of creating the new record:

@code {
    private bool isCreatePostDialogVisible;
    private Post newPost = new Post();
    ...
    private async Task CreatePost()
    {

        var mutation =
            new MutationQueryBuilder()
                .WithCreatePost(
                    new PostQueryBuilder().WithAllScalarFields(),
                        new CreatePostInput
                            {
                                Body = newPost.Body,
                                Title = newPost.Title
                            }
                )
                .Build(Formatting.Indented, 2);

        //Generated Query:
        // mutation {
        //     createPost(input: { title: "New Title", body: "New Body" }) {
        //         id
        //         title
        //       body
        //     }
        // }

        var request = new GraphQLRequest
            {
                Query = mutation
            };

        var graphQLResponse = await HttpClient.SendMutationAsync<GraphQlResponse>(request);

        posts.Add(graphQLResponse.Data.CreatePost);
        isCreatePostDialogVisible = false;
    }

    private void ShowCreatePostDialog()
    {
        newPost = new Post();
        isCreatePostDialogVisible = true;
    }

    private void CloseCreatePostDialog()
    {
        isCreatePostDialogVisible = false;
    }
}

In the code section above, we created the ShowCreatePostDialog and CloseCreatePostDialog methods as auxiliary methods that allow handling window visibility, while the CreatePost method is responsible for defining a mutation for creating a new post with the information inserted by the user.

Finally, we need to modify the GraphQlResponse class by adding the CreatePost property, since this is the definition returned by the service:

public class GraphQlResponse
{
    public PostsData Posts { get; set; }
    public Post CreatePost { get; set; }
}

When running the application, you’ll see a button below the TelerikGrid control which, when pressed, will show a new window to enter the data for the new post:

A floating window that allows entering the information for the new post using the TelerikWindow control

Now let’s see how to delete a record.

Deleting a Record

The next functionality we’ll add to the application will be to delete a post from the list. Let’s take advantage of the TelerikGrid control feature that allows adding commands to execute tasks as follows:

<TelerikGrid ...>
    <GridColumns>
        ...
        <GridCommandColumn>            
            <GridCommandButton Command="Delete" OnClick="@DeletePost">Delete</GridCommandButton>
        </GridCommandColumn>
    </GridColumns>
</TelerikGrid>

With the code above, we’ve added a button to the graphical interface that we can link to a custom method to delete a post, which is as follows:

@code{
    ...
    private async Task DeletePost(GridCommandEventArgs args)
    {
        var post = args.Item as Post;

        var idParameter = new GraphQlQueryParameter<string>("id", "ID!", null);

        var mutation =
        new MutationQueryBuilder()
            .WithParameter(idParameter)
            .WithDeletePost(idParameter);

        //Generated Query:
        // mutation($id: ID!) {
        //     deletePost(id: $id)
        // }

        var request = new GraphQLRequest
            {
                Query = mutation.Build(Formatting.Indented, 2),
                Variables = new { id = post.Id }
            };

        var graphQLResponse = await HttpClient.SendMutationAsync<DeleteResponse>(request);

        if (graphQLResponse.Data.DeletePost)
        {
            posts.Remove(post);
            await JSRuntime.InvokeVoidAsync("alert", $"Post with ID {post.Id} was deleted");
        }        
    }
}

In the code above, idParameter is defined and passed both to the mutation and to the definition of the deletePost operation. You can see the generated query in the commented code. The value for the id variable is supplied when the GraphQLRequest instance is created. The request is then sent with SendMutationAsync and, if the service confirms the deletion, the post is removed from the local list.

Another thing we need to do is define a new class to represent the deletion response as follows:

public class DeleteResponse
{
    public bool DeletePost { get; set; }
}

When running the code above, you can see a new button with the title Delete, which when pressed will delete the post from the list:

The TelerikGrid control showing that the record with Id 7 has been deleted

Once we have implemented the functionality to delete a post, let’s see how to edit an existing post.

Editing a Post

The last operation we need to implement is being able to edit a post. We’re going to approach this differently, editing the element on a different page. Let’s start by adding the new command to navigate to the new page within the TelerikGrid control:

<TelerikGrid ...>
    <GridColumns>
        ...
        <GridCommandColumn>
            ...
            <GridCommandButton Command="Edit" OnClick="@EditPost">Edit</GridCommandButton>
        </GridCommandColumn>
    </GridColumns>
</TelerikGrid>

Let’s define the custom command by adding the following method to the code section:

private async Task EditPost(GridCommandEventArgs args)
{
    var post = args.Item as Post;
    Navigation.NavigateTo($"/editpost/{post.Id}");
}

The code above receives the parameter of the element on which the button is pressed and then navigates to the editpost page passing the record id as a parameter. This page hasn’t been created yet, so we’ll proceed to create the new page component inside the Components folder with the name EditPost.razor, which looks like this:

@page "/editpost/{id:int}"
@using BlazorGraphQLDemo.Models
@using GraphQL
@using GraphQL.Client.Http
@using Telerik.Blazor.Components
@inject NavigationManager Navigation
@rendermode InteractiveServer
@inject GraphQLHttpClient HttpClient
@inject IJSRuntime JSRuntime

<h3>Edit Post</h3>

<EditForm Model="@post" OnValidSubmit="@UpdatePost">
    <DataAnnotationsValidator />
    <ValidationSummary />
    <div>
        <label>Title:</label>
        <InputText @bind-Value="post.Title" />
    </div>
    <div>
        <label>Body:</label>
        <InputText @bind-Value="post.Body" />
    </div>
    <TelerikButton ButtonType="@ButtonType.Submit">Save</TelerikButton>
    <TelerikButton OnClick="@Cancel">Cancel</TelerikButton>
</EditForm>

As we mentioned earlier, this component represents a new page that expects the id of the post we want to modify. On this page, we use an EditForm component to modify the post information. Below I show you the code to carry out both the retrieval of the requested post and the modification of the post:

@code {
    [Parameter] public int Id { get; set; }
    private Post post = new Post();

    protected override async Task OnInitializedAsync()
    {
        await LoadPost();
    }

    private async Task LoadPost()
    {
        var idParameter = new GraphQlQueryParameter<string>("id", "ID!", null);

        var postFragment = new PostQueryBuilder()
            .WithId()
            .WithTitle()
            .WithBody();

        var query = new QueryQueryBuilder()
            .WithParameter(idParameter)
            .WithPost(
                postFragment,
                idParameter
            );

        //Generated Query:
        // query($id: ID!) {
        //     post(id: $id) {
        //         id
        //         title
        //       body
        //     }
        // }

        var request = new GraphQLRequest
            {
                Query = query.Build(Formatting.Indented),
                Variables = new { id = Id }
            };

        var response = await HttpClient.SendQueryAsync<GraphQlResponse>(request);
        post = response.Data.Post;
    }

    private async Task UpdatePost()
    {
        var idParameter = new GraphQlQueryParameter<string>("id", "ID!", null);
        var inputParameter = new GraphQlQueryParameter<UpdatePostInput>("input", "UpdatePostInput!", new UpdatePostInput());

        var mutation = new MutationQueryBuilder()
            .WithParameter(idParameter)
            .WithParameter(inputParameter)
            .WithUpdatePost(
                new PostQueryBuilder()
                    .WithId()
                    .WithTitle()
                    .WithBody(),
                idParameter,
                inputParameter
            );
        //Generated query:
        // mutation($id: ID!, $input: UpdatePostInput!) {
        //     updatePost(id: $id, input: $input) {
        //         id
        //         title
        //       body
        //     }
        // }

        var request = new GraphQLRequest
            {
                Query = mutation.Build(Formatting.Indented),
                Variables = new
                {
                    id = post.Id,
                    input = new { title = post.Title, body = post.Body }
                }
            };

        var response = await HttpClient.SendQueryAsync<GraphQlResponse>(request);

        await JSRuntime.InvokeVoidAsync("alert", $"Post updated successfully. Title: {response.Data.UpdatePost.Title}, Body: {response.Data.UpdatePost.Body}");

        Navigation.NavigateTo("/posts");
    }

    private void Cancel()
    {
        Navigation.NavigateTo("/posts");
    }
}

In the code above, within the LoadPost method, I show you a way in which you could represent the use of fragments that can be reused in other queries in a typed way.

Similarly, in the UpdatePost method, an inputParameter of type UpdatePostInput (a type defined in the GraphQL schema) is used and is required to update an element (as can be seen in the generated query). This parameter, along with idParameter, is used to create the query that executes the mutation to edit a post. On the other hand, the Cancel method navigates back to the previous page without executing any changes.

Finally, you must update the GraphQlResponse class as follows to support the service responses:

public class GraphQlResponse
{
    public PostsData Posts { get; set; }
    public Post CreatePost { get; set; }
    public Post UpdatePost { get; set; }
    public Post Post { get; set; }
}

When applying the changes above, you’ll see a new button in the TelerikGrid control that will allow navigation to the new page to make edits to an element. When making the change and pressing the Save button, you’ll see the changes applied to the entity as follows:

A message showing the changes on the selected entity in the TelerikGrid

With this, we have finished implementing all CRUD operations on the GraphQL service.

Conclusion

Throughout this post, you have learned how to integrate a GraphQL service into a Blazor application by performing different types of service queries, as well as leveraging Telerik UI for Blazor controls to create quick and beautiful graphical interfaces. It’s time for you to get to work and extend the application even further with new functionalities!


Want to try it yourself? Telerik UI for Blazor comes with a free 30-day trial.

Get Started

]]>
urn:uuid:306b27d6-605f-412e-994b-370ffd28c542 Using Angular in a Windows Forms Application See how to use Kendo UI for Angular components in Telerik UI for WinForms applications to exchange communication and events. 2025-02-10T14:11:22Z 2025-02-26T04:51:04Z Jefferson S. Motta See how to use Kendo UI for Angular components in Telerik UI for WinForms applications to exchange communication and events.

In this post, I’ll demonstrate how to use Progress Kendo UI for Angular components in Telerik UI for WinForms applications. You’ll learn the pitfalls and how to implement communication to Angular from WinForms and get back events from Angular.

I’m sharing the complete source code, which is fully functional using Telerik UI, .NET 8 and Angular, on my GitHub.

Note: This post was written before the launch of .NET 9 or Angular 19, but don’t forget you can get started with both of those new versions.

If You Are Asking Yourself: Why Would I Do This?

There are several scenarios where this could be applied:

  • Start the migration from legacy WinForms applications to Angular
  • Integration from local resources (on-premises databases and other WinForms resources)
  • Build lightweight UI applications in complex WinForms apps
  • Build WinForms applications with the look and feel of a web application
  • Execute distributed UIs running in Docker, IIS or Cloud
  • Update the UI without updating the WinForms client application

These transition scenarios from your legacy applications can help you use the active production resources while developing the new service/application. A hybrid solution can keep the current WinForms while empowering developers to build the client-side application.

The sky is the limit.

Let’s Do It

To replicate this sample, you must create a WinForms app and an Angular project to host the desired controls. If you are integrating with a legacy WinForms app, you just need to create the Angular project.

Install the latest version of Angular from the terminal prompt:

npm install -g @angular/cli

Enter a target directory (for example, C:\Telerik) and create the new project:

ng new my-app

Choose the options for CSS:

Choose whether to enable SSR (Server-Side Rendering) or SSG (Static Site Generation). I prefer SSG for small applications to avoid constant network traffic:

Wait for the installation to finish:

About the Objective of This Application Sample

For this case, I’m demonstrating the Kendo UI for Angular Chart control and handling its click event in WinForms.

Note: Progress Telerik has built a fantastic environment on the web; thousands of samples are available online to get started with Telerik technologies. In this case, I’m using part of the code from this post.

Kendo UI for Angular App

Follow the steps to configure and use the controls from WinForms.

  1. We are going to be using Kendo UI components, specifically the Kendo UI for Angular Pie Chart. Before creating the chart and its functionality, let’s install it so our app has access to it. At the root of our my-app project, type this installation command:
ng add @progress/kendo-angular-charts

This command does multiple things for us (like installing the charts and their dependencies). Read more about that command and other special install cases in the Kendo UI docs.

Now, add kendo-angular-charts to your main.

We need to create the host pages and components, add an interface (a “verb”) to receive data, and use a CustomEvent to return data.

  2. Start creating the pages with the controls you’d like to use. Let’s generate an Angular component called graph-control; it will host the Angular chart component:

    ng g c graph-control
  3. Add a route for your new component to app.routes.ts so the page that hosts the component can be reached:
import { Routes } from '@angular/router';
import { GraphControlComponent } from './graph-control/graph-control.component';
 
export const routes: Routes = [
    {
     path: 'graph-control', 
     component: GraphControlComponent 
    }
];
  4. Create the component that hosts the control from the command line. In this sample, we are demonstrating just the chart component:

    ng g c win-chart
  5. Customize the control.

Add the interface that will be used to exchange data (receiveData) with WinForms. I call these “verbs” because you can add more than one interface to transfer data:

declare global {
  interface Window {
    receiveData: (data: any) => void;
  }
}

Now, right inside our WinChart Component, we need to create a public winFormsData: any = null; variable to hold our data in.

  6. Next, let’s incorporate local storage to preserve our data between page refreshes; there is nothing more infuriating to a user than losing progress. In ngOnInit, we get the data from local storage and update our winFormsData value if savedData exists:
public winFormsData: any = null;

constructor() {
  window.receiveData = (data: any) => {
    this.winFormsData = data;
    localStorage.setItem('winFormsData', JSON.stringify(data));
  };
}

ngOnInit() {
  const savedData = localStorage.getItem('winFormsData');
  if (savedData) {
    this.winFormsData = JSON.parse(savedData);
  }
}

Add a click event handler for the chart, which we’ll bind to from the component’s HTML:

onSeriesClick(event: SeriesClickEvent): void {
  const category = event.category;
  const value = event.value;

  console.log('Category:', category);
  console.log('Value:', value);

  const message = JSON.stringify({ category, value });

  // Create a new custom event
  const eventClick = new CustomEvent('MyClick', {
    detail: { message: message }, // Pass any necessary data
  });

  window.dispatchEvent(eventClick);
}

Tip: This is a pitfall; pay attention to the JSON you return. An incorrectly formatted JSON payload will break the delivery:

const message = JSON.stringify({ category, value });

Remove the default HTML from win-chart.component.html and add a Kendo UI Chart that uses the series click handler we just made.

<div *ngIf="winFormsData === null">Loading....</div>
<!-- check the winFormsData variable -->

<div *ngIf="winFormsData !== null">
  <kendo-chart
    (seriesClick)="onSeriesClick($event)">
    <kendo-chart-title
      color="black"
      font="12pt sans-serif"
      text="WinForms x Angular - Data integration"
    >
    </kendo-chart-title>
    <kendo-chart-legend position="top"></kendo-chart-legend>
    <kendo-chart-series>
      <kendo-chart-series-item
        [data]="winFormsData"
        [labels]="{ visible: true, content: label}"
        [type]="typeChart"
        categoryField="name"
        colorField="color"
        field="value">
      </kendo-chart-series-item>
    </kendo-chart-series>
  </kendo-chart>
</div>

On the graph-control page, add the HTML to bind:

<app-win-chart></app-win-chart>

To speed us along, I’ll supply the complete file for win-chart.component.ts (it is also available on my GitHub repository):

import { Component } from '@angular/core';
import { ChartsModule, LegendLabelsContentArgs, SeriesClickEvent, SeriesType } from "@progress/kendo-angular-charts";
import { CommonModule } from '@angular/common';

declare global {
  interface Window {
    receiveData: (data: any) => void;
  }
}

@Component({
  selector: 'app-win-chart',
  standalone: true,
  imports: [ChartsModule, CommonModule],
  templateUrl: './win-chart.component.html',
  styleUrls: ['./win-chart.component.css']
})
export class WinChartComponent {
  public winFormsData: any = null;
  public typeChart: SeriesType = "pie";

  constructor() {
    window.receiveData = (data: any) => {
      this.winFormsData = data;
      localStorage.setItem('winFormsData', JSON.stringify(data));
    };
  }

  ngOnInit() {
    const savedData = localStorage.getItem('winFormsData');
    if (savedData) {
      this.winFormsData = JSON.parse(savedData);
    }
  }

  public label(args: LegendLabelsContentArgs): string {
    return `${args.dataItem.name}`;
  }

  onSeriesClick(event: SeriesClickEvent): void {
    const category = event.category;
    const value = event.value;

    console.log('Category:', category);
    console.log('Value:', value);

    const message = JSON.stringify({ category, value });

    // Create a new custom event
    const eventClick = new CustomEvent('MyClick', {
      detail: { message: message }, // Pass any necessary data
    });

    window.dispatchEvent(eventClick);
  }
}

Now that your Angular app is ready, let’s start using the WinForms app.

Configuring the WinForms App

In the WinForms app, I isolated the host component WebView2 on a UserControl, AngularWebControl.cs, so all components have the same UserControl base and share the same behavior.

The WebView2 control is needed to host the Angular application from its URL and to interact with WinForms.

This is what the solution’s files in the C# project will look like:

AngularDefs.cs hosts the definitions for the Angular project in a single place. These values could also come from environment variables to avoid hard-coded data, as sketched after the listing below:

namespace app_winforsm;
internal static class AngularDefs
{
    // URL of the Angular application
    public const string Url = "https://aw.jsmotta.com/";

    // Route to the graph component
    public const string RouteGraph = "graph-control";

    // Verb to receive data in the Angular component
    public const string ChartVerb = "receiveData";
}
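
As a minimal sketch of that environment-variable alternative (the variable names ANGULAR_APP_URL and ANGULAR_GRAPH_ROUTE are hypothetical, not part of the sample), the same settings could be resolved like this:

namespace app_winforsm;

// Sketch only: reads the same settings from environment variables when present,
// falling back to the hard-coded values used in this sample.
internal static class AngularDefsFromEnvironment
{
    public static string Url { get; } =
        Environment.GetEnvironmentVariable("ANGULAR_APP_URL") ?? "https://aw.jsmotta.com/";

    public static string RouteGraph { get; } =
        Environment.GetEnvironmentVariable("ANGULAR_GRAPH_ROUTE") ?? "graph-control";

    public static string ChartVerb { get; } = "receiveData";
}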

The AngularWebControl.cs holds the interface’s tasks. I added some explanations to the code below. It defines the interface with the component, reads the click event, and passes it to the event handler.

using Microsoft.Web.WebView2.Core;
using Microsoft.Web.WebView2.WinForms;
using System.Text.Json;
using Telerik.WinControls.UI;

namespace app_winforsm;
internal partial class AngularWebControl : UserControl
{
    // WebView Control
    private WebView2? _webView;

    // Event to handle chart item click - it could be only OnItemClick
    public event EventHandler? OnChartItemClick;

    // The data to be passed to the Angular component
    private dynamic? Data { get; set; }

    // a label to show the title of the control
    // in a real-world scenario, we can extend this component and add other controls
    private RadLabel? Title { get; set; }

    public AngularWebControl()
    {
        InitializeComponent();
    }
    public async void LoadData(string title, dynamic data)
    {
        if (Title == null)
        {
            Title = new RadLabel
            {
                Text = title,
                Dock = DockStyle.Top,
                Width = this.Width,
                AutoSize = true,
                Font = new Font("Arial", 12, FontStyle.Bold),
                ThemeName = "Windows11"
            };

            this.Controls.Add(Title);

            Title.MouseUp += Title_MouseUp;
        }

        this.Title.Text = title;

        if (_webView == null)
        {
            _webView = new WebView2
            {
                Visible = true,
                Dock = DockStyle.Fill
            };

            this.Controls.Add(_webView);

            var userDataFolder1 = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), $"AngularWinFormsApp_{this.Name}");

            var environment1 = await CoreWebView2Environment.CreateAsync(userDataFolder: userDataFolder1);

            // The environment is created to avoid loss of data in the session
            await _webView.EnsureCoreWebView2Async(environment1);

            _webView.CoreWebView2.NavigationCompleted += WebView_NavigationCompleted;

            // Event to receive data from Angular
            _webView.CoreWebView2.WebMessageReceived += CoreWebView2_WebMessageReceived;

            _webView.CoreWebView2.Navigate($"{AngularDefs.Url}{AngularDefs.RouteGraph}");

            if (OnChartItemClick != null)
            {
                // This is the trick to receive data from the Angular component
                await _webView.CoreWebView2.ExecuteScriptAsync(@"
                    window.addEventListener('MyClick', function(event) {
                        window.chrome.webview.postMessage(event.detail.message);
                    });
                ");
            }
        }

        // Send the data to the Angular component
        this.Data = data;
    }

    private void Title_MouseUp(object? sender, MouseEventArgs e)
    {
        if (e.Button == MouseButtons.Right)
        {
            // An easter egg to show the WebView console
            // when pressing right click on the RadLabel
            ShowWebViewConsole();
        }
    }

    // Event handler to handle messages received from the WebView2
    private void CoreWebView2_WebMessageReceived(object? sender, CoreWebView2WebMessageReceivedEventArgs e)
    {
        // Retrieve the message from the event
        var message = e.TryGetWebMessageAsString();

        // Display the message or perform any action
        OnChartItemClick?.Invoke(message, EventArgs.Empty);
    }
    private async void WebView_NavigationCompleted(object? sender, CoreWebView2NavigationCompletedEventArgs e)
    {
        if (_webView == null) return;

        _webView.Visible = true;

        if (!e.IsSuccess)
        {
            // Return a custom message based on the error to avoid the default WebView error page
            switch (e.WebErrorStatus)
            {
                case CoreWebView2WebErrorStatus.ConnectionAborted:
                    ShowErrorMessage("Connection refused. Please make sure the server is running and try again.");
                    break;
                case CoreWebView2WebErrorStatus.Unknown:
                case CoreWebView2WebErrorStatus.CertificateCommonNameIsIncorrect:
                case CoreWebView2WebErrorStatus.CertificateExpired:
                case CoreWebView2WebErrorStatus.ClientCertificateContainsErrors:
                case CoreWebView2WebErrorStatus.CertificateRevoked:
                case CoreWebView2WebErrorStatus.CertificateIsInvalid:
                case CoreWebView2WebErrorStatus.ServerUnreachable:
                case CoreWebView2WebErrorStatus.Timeout:
                case CoreWebView2WebErrorStatus.ErrorHttpInvalidServerResponse:
                case CoreWebView2WebErrorStatus.ConnectionReset:
                case CoreWebView2WebErrorStatus.Disconnected:
                case CoreWebView2WebErrorStatus.CannotConnect:
                case CoreWebView2WebErrorStatus.HostNameNotResolved:
                case CoreWebView2WebErrorStatus.OperationCanceled:
                case CoreWebView2WebErrorStatus.RedirectFailed:
                case CoreWebView2WebErrorStatus.UnexpectedError:
                case CoreWebView2WebErrorStatus.ValidAuthenticationCredentialsRequired:
                case CoreWebView2WebErrorStatus.ValidProxyAuthenticationRequired:
                default:
                    ShowErrorMessage("An error occurred while loading the page.");
                    break;
            }
            return;
        }

        var jsonData = JsonSerializer.Serialize(Data);

        // Here is the connection with the interface (verb) defined in the Angular component
        var script = $"window.{AngularDefs.ChartVerb}({jsonData});";

        await _webView.CoreWebView2.ExecuteScriptAsync(script);
    }
}

Message.cs is the model for the click event data returned from the Angular app.
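
The article doesn’t list Message.cs, but based on the { category, value } JSON sent from Angular and the properties used in FormMain.cs below, a minimal sketch could look like this (the JsonPropertyName mapping is an assumption, added to match the camel-cased payload):

using System.Text.Json.Serialization;

namespace app_winforsm;

// Sketch only: maps the { category, value } JSON payload dispatched by the Angular chart.
public class Message
{
    [JsonPropertyName("category")]
    public string? Category { get; set; }

    [JsonPropertyName("value")]
    public double Value { get; set; }
}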

Here is the use case for the controls in FormMain.cs. I added one control dynamically and another using drag-and-drop from the Toolbox. It is important to note that each control needs a distinct Name property to avoid collisions between the WebView2 sessions; this is a pitfall.

I use mocked data in this sample, but you will probably read from a data source in real-world applications.

using System.Text.Json;
using Telerik.WinControls;
using Telerik.WinControls.UI;

namespace app_winforsm;

public partial class FormMain : RadForm
{
    private readonly AngularWebControl? _angularWebControl;

    public FormMain()
    {
        InitializeComponent();

        // Load the AngularWebControl programmatically
        _angularWebControl = new AngularWebControl { Name = "_angularWebControl" };
        _angularWebControl.Dock = DockStyle.Fill;

        splitPanel1.Controls.Add(_angularWebControl);

        // Subscribe to the OnChartItemClick event
        _angularWebControl.OnChartItemClick += AngularWebControl_OnChartItemClick;

        LoadData();
    }

    private void AngularWebControl_OnChartItemClick(object? sender, EventArgs e)
    {
        if (sender is null)
            return;

        var message =
            JsonSerializer.Deserialize<Message>(sender.ToString() ?? throw new Exception("Data is not a json."));

        RadMessageBox.ThemeName = "Windows11";
        RadMessageBox.Show($"You clicked on {message.Category} with value {message.Value}", "Chart Item Clicked",
            MessageBoxButtons.OK, RadMessageIcon.Info);
    }

    private void LoadData()
    {

Note: In a production project, you will load the data from your repository!

        var data = new[]
        {
            new { name = "Gastroenteritis", value = 40, color = "red" },
            new { name = "Appendicitis", value = 25, color = "blue" },
            new { name = "Cholecystitis", value = 15, color = "green" },
            new { name = "Pancreatitis", value = 10, color = "yellow" },
            new { name = "Diverticulitis", value = 10, color = "orange" }
        };

        _angularWebControl?.LoadData("Common gastro diseases in hospitals", data);

        var dataAges = new[]
        {
            new { name = "0-10", value = 1, color = "red" },
            new { name = "11-20", value = 10, color = "blue" },
            new { name = "21-30", value = 20, color = "green" },
            new { name = "31-40", value = 25, color = "yellow" },
            new { name = "41-50", value = 15, color = "orange" },
            new { name = "51-60", value = 20, color = "purple" },
            new { name = "61-70", value = 8, color = "brown" },
            new { name = "71+", value = 7, color = "pink" }
        };

        this.angularWebControl1.LoadData("Patient ages in gastro diseases", dataAges);
    }
}

And This Is the Result!

As you can see, the two charts share the same interface and UserControl. What you can’t see is that each one runs in a distinct web session. The sessions are isolated to preserve data and for security; the same UserControl could even use distinct credentials depending on the URL passed as a parameter.

Workflow of This Sample

The picture below shows the flow, from coding through execution to the callback that fires when the end user clicks on the chart. Download the source code from GitHub and give it a try.

Conclusion

The interoperability isn’t complex, and it can enable a mixed team to design better interfaces with lower memory consumption than classic WinForms apps.

In edge computing, I imagine the interface running on a server near the end user, even in a local Docker container or an Azure/AWS node near the client machine, avoiding long network round trips.

Download the code and see how it works and all the possibilities this feature can bring to your business and services.

Feel free to use the code or contact me on LinkedIn. Also, remember that Progress Telerik offers free support during the evaluation period.

Try Telerik DevCraft Today

References

]]>
urn:uuid:2e1e2dad-3265-4d73-821c-ff09600f2033 Coding Azure 2: Configuring an Azure SQL Database The critical part of a business application is the database. Here’s how to configure a serverless database in Azure SQL (and how to do it for free, if you just want to experiment). 2025-02-06T14:04:00Z 2025-02-26T04:51:04Z Peter Vogel The critical part of a business application is the database. Here’s how to configure a serverless database in Azure SQL (and how to do it for free, if you just want to experiment).

This post is going to walk through setting up an Azure SQL database for a typical cloud-native application. That actually requires setting up three things:

  • An instance of a SQL Server database management system: For this post, I’ll call that “the database engine”
  • A virtual machine that the database engine will be running on: I’ll call that “the database server”
  • A collection of tables managed by my database: I’ll call that “the database”

When I’m referring to all three of these together, I’ll refer to “the database resource.”

Defining Your Database Resource

The first step is to surf to the Azure portal, type “Azure SQL” into the search box in the top center of the page, and select Azure SQL from the dropdown list to get to the Azure SQL overview page.

Once on the Azure SQL overview page, from the menu across the top of the page, at the left-hand end, click the + Create menu choice to take you to the Select SQL deployment option page.

Currently (January 2025), this page gives you three choices. For the sample app I’m creating for this series, I’m going to set up the simplest version: In the SQL databases choice, I selected the Single database option in the Resource Type dropdown and clicked the Create button. That’s actually not a bad choice for most applications that don’t need the scalability of, for example, a managed instance. For an in-depth discussion of the right options, I liked this overview.

Defining the Database Engine

Clicking the + Create button for your database option will bring you to the Create SQL database wizard. For the case study in this series, I’m going to pick the free Azure SQL database option that’s currently offered. That makes sense when you only want a database resource to use for demo purposes (e.g., this series of blog posts). If you’re doing this in real life, you’ll pick something that doesn’t have the free offer’s limitations: 32 GB of data and 100,000 free processing seconds every month (there’s been at least one month where I used up all my processing seconds just in writing this series).

This offer also commits me to creating a serverless solution (the database server may be auto-scaled if it sees enough activity and, if that happens, I’ll be charged for extra cores as I use them). I’m OK with that because my sample app won’t ever be busy enough to trigger auto-scaling with the resulting charges. In real life, Serverless is a pretty good choice for a production system if, for no other reason, auto-scaling reduces the administration required to size your database resource correctly.

The other choice (Provisioned) only makes sense if you’re confident that you can both predict the demand for your database resource so that you can size your database resource correctly and you don’t expect that demand to fluctuate much around that prediction (if you have a lot of fluctuation, then you’ll be paying for a lot of resources that you’re not using much of the time).

I need to assign my database resource to a resource group and give it a name (upper/lower case and spaces are allowed here). I used WarehouseMgmt. I’m, eventually, going to put all the resources for my application in this resource group. Other organizations might put all their Azure SQL databases in a single resource group even though the databases are used by several different applications (a strategy that can make sense if you have a single group that’s responsible for all your databases).

After that, you need to give your database a name (I used WarehouseDB).

Next, you’ll need to pick a database server to host your database engine (not a real server, of course—you’re creating a virtual server, which means that you’re just being allocated some amount of processing power, disk space and memory on some computer in an Azure datacenter). If you don’t already have a virtual server set up that you can use, click the Create New button to be taken to the Create SQL Database Server page.

Defining the Database Server: Part I

On this page, you’ll need to:

  • Give your server a name (this name gets rolled into a URL and will have to be all lowercase and without spaces). Copy the resulting URL—you’ll need it later (if you don’t write it down, don’t panic—you can always return to the Overview page for your database resource to retrieve the URL). I used warehousedbserver.
  • Pick an Azure region from the Location dropdown list. I used Canada Central.
  • Pick an Authentication method. For the purposes of this case study, I want to show you how to write the code required so that this server can only be accessed by other out-of-the-box Azure resources, so I left that choice at the default of Use Microsoft Entra-only authentication. Other choices would, for example, let you authenticate against your on-premises identity provider (i.e., Active Directory Domain Services).
  • Specify which user will act as your administrator: Click on the Set admin link and, from the panel that appears, pick a user from the provided list of identities set up in your tenant’s Entra ID (you’ll probably pick you).

After clicking the OK button, you’ll return to your Create SQL Database page.

Defining the Database Server: Part II

Your next choices are to decide:

  • If you want your database engine to share processing resources with other engines by selecting Elastic Pools (this makes sense if you have multiple database engines that never hit capacity at the same time and can share the pool). For my sample app, I did not.
  • Whether you want a Production or Development workload environment. These two choices represent bundles of capacity choices that you can modify a little further down on this page. Picking Development, for example, limits you to a single vCore with 32 GB of space; picking Production turns on Hyperscale and removes the limit on size. For my sample application, I picked Development because it’s the cheapest choice.

Configuring Your Database Server Capacity

Now you can override some/all of the Production/Development capacity settings by clicking on the Configure database link. Because I picked the free offer, most of the choices I could make here can’t be changed. The only choice I can make is to auto-pause the database engine after I’ve used up my 100k of free seconds for the month and not to restart the database engine until next month (which I did to avoid running up any charges).

In real life, you can pick:

  • The maximum and minimum number of vCores you’ll be allocated (that will give you more asynchronous/parallel processing for handling multiple requests)
  • The maximum space that your database resource can grow to
  • Whether you’d like your database resource to be copied to another zone

That last option probably requires some explanation. By default, you have failover protection within the Microsoft datacenter hosting your database resource. You can pick zone redundancy, which gives you failover coverage for your database resource in another datacenter in the same region (e.g., East US, UK South). All the Microsoft datacenters in the same region have roughly the same network response speed so, if your database resource does fail over to another datacenter, you shouldn’t notice any degradation in performance (though there will probably be a short disruption during the failover). Zone redundancy will, of course, cost you more.

After you click the Apply button, you’ll be back to your Create SQL Database page.

Defining the Database Server: Part III

Your last choice is where your backups are stored. Because I picked the free offer, I don’t get a choice (I still get backups, though—I just don’t get to pick where they’ll be kept). The good news here is that I’m getting the cheapest option (locally redundant, with all the copies of my backups stored in the same datacenter that’s hosting my database resource) and, for this case study, that’s what I want.

The other options will store your backups in another datacenter in the same region or in a datacenter that’s (usually) over 300 miles away, or both (at higher cost, of course).

Configuring Access

The next tab in the wizard lets you set your networking options. Selecting No access effectively defers setting access until after your database resource is created when you can configure its firewall settings. But you will, in the end, have to let something access your database engine, and you might as well select your initial setup here.

Initially, I want to enable the Public option, which allows devices outside of the Azure cloud to access the database, so that I can use my standard database management tools (SQL Server Management Studio and Azure Data Studio) from my desktop to configure my database.

However, I also wanted to limit that public access to just the computer that I’m going to use to manage my database (which is the computer I’m currently using when creating my Azure SQL database). I used the “add your client” option on the network tab that created an exception in the database server’s firewall to let my computer in.

In real life you don’t, in the long run, want the public option enabled—it’s an unnecessary exposure to a cruel and dangerous world. Eventually, you’ll return to the database engine and, from the left-hand menu, select the Networking option to remove your client computer from the list of allowed devices.

You’ll also want to set the Allow Azure services … option that enables applications in your tenant to access the database engine. I turned this on because my case study will include a Web Service running in an Azure App Service that will need to access this database engine.

Loading Data

I then skipped ahead to the Additional settings tab and, under Data source, selected Sample. This loads a copy of one of Microsoft’s standard sample databases, AdventureWorksLT. I can do this because I really don’t care what data I use for this case study, so Adventure Works is fine with me.

You, on the other hand, will want to load some data from your organization (you have a lot of choices for loading existing data). Those options include simply restoring from an on-premises SQL Server backup (provided you can get the backup file up to the cloud).

After selecting the Sample data, I clicked the Review + create button to go to the wizard’s last page where I clicked the Create button to create my database server and its virtual machine.

Accessing the Database

To confirm that everything is set up correctly, you can use Azure Data Studio (ADS) or SQL Server Management Studio (SSMS) to connect to your database server and review its contents.

Azure Data Studio

After starting ADS, select the Create a Connection choice to connect to your Azure SQL database server. To make that connection, when the Connection panel appears:

  • In the Server textbox, enter the URL for the database server (note: not the database engine) that you copied earlier. In my case, that’s warehousedbserver.database.windows.net.
  • From the Authentication type dropdown, select the authentication method you set when you created the database server (I picked Entra ID as my authentication provider, so I picked that option).
  • From the Account dropdown list, select Add an account and go through the process of logging into Azure.
  • In the Name textbox, give this connection a name that you’ll recognize when you want to reconnect with this database engine from ADS.
  • Finally, click the Connect button.

You should connect to your database server and get a panel listing the databases managed by your engine (e.g., master and whatever database you just created—in my case, that’s the database I called WarehouseDB that holds the AdventureWorksLT sample database).

To view any tables in your database, just double-click on your new database (I did and was able to drill down to the sample AdventureWorksLT tables and their data that I loaded).

SQL Server Management Studio

Personally, I prefer using SSMS, but that’s probably just habit on my part. After starting SSMS, you’ll be presented with the Connect to Server dialog that will let you connect to your database server (notice: not the database engine—it’s an important distinction). In that dialog:

  • In the Server name textbox, enter the URL for the database server that you copied earlier (in my case, that’s warehousedbserver.database.windows.net).
  • In the Authentication dropdown, select the choice that matches the authentication method you picked when creating your database server. When I created my database engine, I picked Entra ID as my authentication choice, so I selected Microsoft Entra MFA from this list.
  • In the User name textbox that appears, enter your Azure username.
  • Click the Connect button—you’ll almost certainly be asked to authenticate to Azure. Once you’re logged in you can drill down to the tables in your database in the panel on the left side of the SSMS window.
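
If you’d rather confirm connectivity from code instead of a GUI tool, here is a minimal sketch using Microsoft.Data.SqlClient. It assumes the server and database names used in this post, interactive Entra authentication and the AdventureWorksLT sample schema; it isn’t part of the case study’s application code.

using Microsoft.Data.SqlClient;

// Sketch only: connects with interactive Microsoft Entra authentication
// and reads a few rows from the AdventureWorksLT sample.
const string connectionString =
    "Server=tcp:warehousedbserver.database.windows.net,1433;" +
    "Database=WarehouseDB;" +
    "Authentication=Active Directory Interactive;" +
    "Encrypt=True;";

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

await using var command = new SqlCommand(
    "SELECT TOP (3) Name FROM SalesLT.Product;", connection);

await using var reader = await command.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    Console.WriteLine(reader.GetString(0));
}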

Next Steps

Now, with your database created, you can start thinking about creating an application that uses it. That’s my next post, where I’ll create an Azure App Service/Web app, create a Web Service in Visual Studio that accesses the database and deploy that Web Service to my App Service.

]]>
urn:uuid:31654a5b-edd5-4b37-850c-4a86854f2875 Coding Azure 1: Creating Cloud-Native Apps If you find that the documentation and training around creating a cloud-native Azure app isn’t giving you the help you need, Coding Azure intends to fix that. Here’s how. 2025-02-06T14:03:02Z 2025-02-26T04:51:04Z Peter Vogel If you find that the documentation and training around creating a cloud-native Azure app isn’t giving you the help you need, Coding Azure intends to fix that.

Here’s what this series of blog posts is about: All the steps to create a functioning cloud-native application, involving all the necessary resources—from configuring the resources through securing them and writing the necessary code to use them. This series will show you how to do that, using both Visual Studio and Visual Studio Code and with a single sample app as my case study.

Which raises the question: Why is this necessary?

The Business Reason

For me, the driver for going to cloud-native applications is to avoid spending my time on things that don’t make my clients (or organization) better. I would never, for example, suggest to my clients that building their own Accounts Payable system is a good idea (in fact, I would actively try to talk them out of it). Unless presented with some compelling way that a custom AP system will make your organization better, you should be doing AP like everyone else, which means you should buy, not build, your AP system (typically, the technical term for doing Accounts Payable differently from everyone else is “fraud”).

For me, that same philosophy extends to managing the standard parts of your application infrastructure: database engines, storage, server management and so on. In other words, all the stuff that Azure gives you.

There are, however, two problems with that philosophy: First, based on my experience as a developer, creating cloud-native applications with Microsoft Azure resources is … challenging. Second: While you don’t have to manage these resources, in order to use them you still have to know quite a lot of stuff and, mostly, stuff you didn’t know before.

The spirit of the SOLID principles is that we build complicated programs out of simple components. Extended to the application level, that means we build complicated systems out of simpler applications—microservices. While there are many definitions of microservice, the two that matter to me are that a) a microservice is small enough that the team that supports it can fully understand it and b) that knowledge can be passed on to any new addition to the team in a single sprint.

The Knowledge Reason

And, as this series is going to demonstrate, even a minimal cloud-native Azure application requires that team—or “you”—to be familiar with multiple Azure Resources (App Services, Azure SQL databases, Key Vaults and more). If you go beyond a “minimal” application to create a more reliable, scalable, extendable and maintainable application, then you’ll need to be familiar with even more Azure resources (queues and events, for example).

That means not only knowing how to use those resources wisely and how to set them up/configure them, but also how to access those resources from code, how to write the required code that lives inside these resources and how to deploy that code into those resources during development using the typical “Microsoft developer” toolset (Visual Studio and Visual Studio Code).

But wait, there’s more: You need to be equally interested in how to best secure your part of the application. While you may not have oversight of how the application’s resources are managed after it’s built, you can do your part to close out as many opportunities for a security breach as possible as you create the application.

My Goal

It’s important to me that this series helps you understand both the what and the why of all of those activities.

To facilitate that understanding, in these posts I’m going to lean on the various GUIs available to you by favoring the Azure Portal over the Azure CLI or PowerShell. I have nothing against CLIs for scripting repetitive tasks, but they tend to obscure what’s going on. Interacting with the GUIs (in my opinion) promotes understanding of what, exactly, you’re doing (and walking you through those GUIs gives me an opportunity to explain the options available to you).

Understanding the available options is important because, when it comes time for you to perform these tasks, you won’t be doing exactly what my case study app requires—you’ll need to tweak my solutions to meet your needs. So, to support understanding, I’m also going to avoid using utilities that take care of multiple steps for you. Those utilities are terrifically useful for implementing the typical solution, but if you need to do something that’s not the typical implementation—well, you need to understand what’s going on.

The Sample App

About my sample app: I’m a business application developer, and business apps are all about retrieving and updating data. As a result, my next post is going to be about setting up an Azure SQL database (it’s one of the posts in this series that’s going to be all configuration).

After that post, I’m going to use Visual Studio to create a server-side C# Web API application using minimal APIs to retrieve data from my database. I’ll also set up an Azure Web App/App Service and deploy that Web Service to it.

Once I’ve got that Web Service working, I’ll have it access the database and configure the service and database so that the only application that can access the database will be my Web Service.

After that, I’ll create in Visual Studio Code both client-side and server-side apps that access my Web Service and deploy those apps to another App Service/Web App. Once that’s working, I’ll configure my Web Service so that the only thing that can talk to it will be my frontend (and I’ll configure that app so that it can only be accessed by authorized users).

After those first posts, you’ll have everything you need to create a secure three-tier app and be able to deploy your code from either Visual Studio or Visual Studio Code. Along the way, I’ll look at some of the tools you’ll need for debugging your code, implementing logging, redeploying Azure resources and any other typical development tasks that crop up.

Summing Up and the Next Steps

This series is intended to be everything a team member would need to know to support a minimal microservice (except for the logic specific to an application—that will be up to you).

After that, I’ll show how to (not necessarily in this order):

  • Use Azure Key Vault to hold secrets and certificates (and access those resources from your application)
  • Use Azure Functions in place of a Web Service hosted in an App Service
  • Leverage the Web API to unify, simplify and document your Web Services
  • Integrate queue-based processing to create reliable, scalable and extensible apps
  • Replace that queue-based processing with the Azure Event Grid to move to a publish/subscribe, event-driven design

All these components will be running in my Azure tenant, and their access will be restricted (with one exception) to specific clients, also running in my Azure tenant. Even then, those permitted clients will be restricted, able to perform specific authorized activities on those components they can access. The one exception to that is my application’s frontend, which can be accessed by authorized users who, presumably, live outside of Azure.

If you feel that I’ve missed some topic a team would need, email me (peter.vogel@phvis.com) or put a note in the comments.

And, with all that out of the way, here’s the link to that next post on setting up an Azure SQL database.

]]>
urn:uuid:172547e9-031b-4f0e-890a-3ff222ce3700 Case Study: Creating Tools for Telerik Scheduler for Blazor Packaging code into reusable objects can save you time—if you make the right design decisions. Here’s a case study for creating tools that work with the Telerik Scheduler for Blazor. 2025-02-05T10:57:03Z 2025-02-26T04:51:04Z Peter Vogel Packaging code into reusable objects can save you time—if you make the right design decisions. Here’s a case study for creating tools that work with the Telerik Scheduler for Blazor.

After writing a ton of code, you realize that you could package it up as a reusable object and never have to write that code again. Here’s some advice on doing that well, with a case study for creating tools for working with the Progress Telerik UI for Blazor Scheduler.

In previous posts about working with recurring events in the Progress Telerik UI for Blazor Scheduler, I’ve sketched out the kind of code that lets you build an app with Scheduler that enables users to modify schedules, override scheduled events, add new events and even add new schedules (here’s the first post in that series).

Of course, the next step for any developer would be to bundle that code into a set of reusable objects that would simplify working with the Scheduler in the next application. The goal would be to be able to configure all of Scheduler’s functionality like this:

<TelerikScheduler
  Data="DataList"
  RecurrenceRuleField="RecurrenceRuleString"
  
  AllowUpdate="true"
  AllowCreate="true"
  AllowDelete="true"

  OnDelete="@DeleteSchedule"
  OnUpdate="@UpdateSchedule"
  OnCreate="@CreateSchedule">

  …other settings…
</TelerikScheduler>

And then support all that functionality with something like these five lines of code:

List<RecurringEvent> DataList;
SchedulerManager<RecurringEvent> sm = new() { AddSchedule = true };

void DeleteSchedule(SchedulerDeleteEventArgs e) { sm.DeleteSchedException(e); }
void UpdateSchedule(SchedulerUpdateEventArgs e) { sm.UpdateExceptionSched(e); }
void CreateSchedule(SchedulerCreateEventArgs e) { sm.CreateExceptionSched(e); }

And that’s all very doable—there are links to the objects I’ve used here (RecurringEvent and SchedulerManager) in this post to prove that. But that project could easily turn into a time-consuming example of “developer’s disease”: If it can be coded, then it must be coded.

No, it doesn’t.

In fact, if there’s one mantra to use as guidance in creating reusable objects, it’s this: You’re creating reusable objects to make you and your team more productive by handling the “typical tasks” in applications you might build for your organization in the future. “Typical tasks,” in this case, being any task that would otherwise require repeating code you’ve already written in the next application.

What you’re not doing is creating a commercial product to support any developer creating any application in any organization.

So, my goal in this post is to look at the kind of design decisions you might make in creating reusable objects, not to provide a set of universal support objects for Scheduler. The code, which I’m pretty sure works, is here as an example of what the results of those decisions look like.

The RecurringEvent Class

For example, Scheduler needs a List of event objects to bind to, and working with Scheduler consists of manipulating that List of event objects. In creating a set of reusable code, I don’t want to support working with any possible object—that would be far too big a task. My first design decision, therefore, is to design the only object that my code will work with.

In this case, I created a class called RecurringEvent that I could tweak as needed to support the code I would write. To simplify that class even further, I had it inherit from the Telerik abstract Appointment class, which has all the properties that Scheduler needs.

Now, having created that class, I can also extend it as necessary to both support the reusable code I’ll write and handle any typical tasks. As an example, I added these members to my RecurringEvent class (a rough sketch of the class follows the list):

  • A default constructor: This constructor helps set every RecurringEvent object’s Id property to a unique GUID. This eliminates the need to set the Id property when generating event objects.

  • RecurrenceRuleString: This is a string representation of the schedule (in RFC5545 format) and is required by Scheduler. My implementation of this property not only gives Scheduler what it wants but also sets the base Appointment class’s RecurrenceRule property (used when generating future events) to update whenever RecurrenceRuleString is changed so that the two properties are always in sync.

  • RecurrenceExceptionsString: Scheduler suppresses regularly scheduled occurrences for any events added to the schedule object’s RecurrenceExceptions property—I’ll need to save that list of exceptions in any database that holds my schedule. To support that, the RecurrenceExceptionsString property returns the RecurrenceExceptions array as a comma-delimited string for easy storage. Similarly, when retrieving a schedule object from my database, I’ll want to reload my RecurrenceExceptions property from that string, and my RecurrenceExceptionsString property supports doing that.

  • GenerateSchedule: To share a schedule with other applications, I’ll need to store the regularly scheduled events in a database. This method holds the code to return a List of RecurringEvent objects for every generated event between the schedule’s Start and the schedule’s OriginalEnd date.
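
To make that list concrete, here is a rough, self-contained sketch of those members. In the author’s actual code the class derives from Telerik’s abstract Appointment base class (which already supplies Id, Start, RecurrenceRule, RecurrenceExceptions, DataItem and the other properties Scheduler needs), so the stand-in property declarations below are simplifications, not the real implementation:

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch only: illustrates the members described above, not the author's full class.
public class RecurringEventSketch
{
    public Guid Id { get; set; }
    public string? RecurrenceRule { get; set; }               // stands in for the base-class property
    public List<DateTime> RecurrenceExceptions { get; set; } = new();
    public object? DataItem { get; set; }                     // carries app-specific data

    // Default constructor: every event gets a unique Id without extra work.
    public RecurringEventSketch() => Id = Guid.NewGuid();

    // RFC5545 string that Scheduler binds to, kept in sync with RecurrenceRule.
    public string RecurrenceRuleString
    {
        get => RecurrenceRule ?? string.Empty;
        set => RecurrenceRule = value;
    }

    // Comma-delimited view of RecurrenceExceptions for easy storage and reload.
    public string RecurrenceExceptionsString
    {
        get => string.Join(",", RecurrenceExceptions.Select(d => d.ToString("o")));
        set => RecurrenceExceptions = value
            .Split(',', StringSplitOptions.RemoveEmptyEntries)
            .Select(DateTime.Parse)
            .ToList();
    }

    // GenerateSchedule (omitted here) would expand the rule into one
    // RecurringEvent per occurrence between Start and OriginalEnd.
}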

And that’s great … but it’s reasonable to assume that any future application that uses Scheduler is probably going to require some additional, application-specific properties that RecurringEvent doesn’t have. Rather than extend my solution to work with any class in the world, I took advantage of the DataItem property that RecurringEvent inherits from the Telerik Appointment class.

The DataItem property is declared as type object so DataItem will hold any .NET object (anything from a string to a collection). I just need to set up my code so that whatever is in a schedule object’s DataItem property is passed to any related objects that SchedulerManager creates.

Now, when I come to use RecurringEvent in some other application, I can store any additional information any application needs in the DataItem property, confident that SchedulerManager will propagate that property appropriately.

And here’s the code for the RecurringEvent class.

SchedulerManager

Having defined the only object that my code has to work with, the next step is to write the code that uses that object—initially only supporting the typical cases.

So, for example, my SchedulerManager exposes the three methods required to support the parts of Scheduler’s functionality that I’m interested in: adding, updating and deleting scheduled events (I covered those methods in my earlier series). Now, for the typical case, I can just wire up those methods to Scheduler’s OnCreate, OnUpdate and OnDelete methods as I showed at the start of this post (recognizing that if I wire up both OnCreate and OnUpdate, the Blazor UI may throw an exception when the user double-clicks on a scheduled event and modifies it).

If you’re interested, here’s the code for SchedulerManager and SchedulerDataManager.

Supporting Customization

Once you’re set up to support the typical cases, you can consider extending your reusable objects to support some minimal set of expected customizations. Or not—supporting the typical case is a perfectly good place to stop and move on to the next application. Having said that, some customizations are easier to support than others.

For example, comprehensive reusable objects like SchedulerManager will sometimes be doing too much for you—there will be cases where you’d prefer to exercise more control than the reusable object provides.

To support that case, I separated my code into two classes: While SchedulerManager supports all the functionality that I want in Scheduler, it does that by calling methods in another class called SchedulerDataManager. SchedulerDataManager provides only the minimal set of functionality for managing Scheduler’s List of events. So, if I find that SchedulerManager is doing too much, I can ignore SchedulerManager and build my application using SchedulerDataManager.

The alternative customization scenario is when your reusable object isn’t doing enough. You can support that case by having your reusable object fire events at critical moments in its processing—that enables you to add additional code to the reusable object’s default processing when necessary. As an example, I’ve set up SchedulerManager to fire events whenever an event object is added, updated or deleted (and cleverly called those events OnEventAdded, OnEventUpdated and OnEventDeleted).

As an example of when these events might be useful, SchedulerManager’s default processing assumes that I’ll update my database after the user is finished interacting with my application and clicks a Save button. For example, SchedulerManager stores the event objects it deletes in a DeletedItems list that I can use to remove data from my database when the user clicks my application’s Save button.

Personally, I like a “Save button strategy” because it also supports having a Cancel button that allows the user to experiment with their schedule without having to save it.

However, there might be a case where I need to update my database as my user makes changes (some application where changes need to be seen in real time, for example). Should that case come up, I could (presumably) handle that by inserting custom database update code into SchedulerManager’s events to extend SchedulerManager’s default processing.

In addition, all three of the events are passed an event parameter that includes a Cancel property that, when set to true, stops SchedulerManager from doing its “normal” processing. This allows me to not only extend SchedulerManager’s processing but also to replace it: I add some code in an event and then set the Cancel property to true to prevent SchedulerManager’s normal processing from happening.
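
As a rough illustration of both ideas, here is a sketch of how a page might plug into those extension points. The DeletedItems list, the OnEventDeleted event and the Cancel property come straight from the description above; the event-args shape (e.Event) and the data-access calls (myDataService.DeleteAsync) are hypothetical names used only for illustration, not SchedulerManager’s actual API:

// Sketch only: two alternative ways a page could use SchedulerManager's extension points.

// Real-time scenario: extend the default processing by pushing each delete
// to the database as it happens. Setting e.Cancel = true instead would
// replace the default processing entirely.
sm.OnEventDeleted += async (sender, e) =>
{
    await myDataService.DeleteAsync(e.Event.Id);   // hypothetical data-access call
};

// Deferred ("Save button") scenario: flush accumulated deletions on demand.
async Task SaveAsync()
{
    foreach (var deleted in sm.DeletedItems)
    {
        await myDataService.DeleteAsync(deleted.Id);
    }
}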

Not Supporting Customization

You’ll need to decide what level of customization you’re willing to support (remembering that “none” is a perfectly good option). There’s also a middle ground open to you: Structure your object to support some potential customization but defer writing the required code to when (and if) you actually need it.

For example, as I discussed in the last part of my series on supporting Scheduler, allowing the user to add multiple schedules can create a problem. It’s a niche case, though, and only occurs if you allow the user to add both new events and multiple schedules to Scheduler. By default, SchedulerManager avoids that problem by preventing the user from adding multiple schedules.

But, rather than cut off the option for multiple schedules completely, I added a property called SupportMultipleSchedules to SchedulerManager. When the property is set to true, SchedulerManager will let the user add multiple schedules. That was all easy to do because all I did was include an empty code block where I would, someday and if I need to, write the more difficult code that handles multiple schedules. Should I ever actually create an application that needs multiple schedules, I’ll fill in that empty code block then.

Another example: While the various events that are fired by SchedulerManager can be cancelled, they don’t prevent the user from deleting a regularly scheduled occurrence (Scheduler handles that pretty much internally, making it difficult to interfere with the process). I didn’t even try supporting that. If I ever write an application where I need to prevent a user from deleting an occurrence from within SchedulerManager, I’ll figure out a way to do it then … and I recognize that may not involve adding code to SchedulerManager at all.

Ignoring a problem isn’t a viable solution for a commercial product. However, I don’t care because my job is to deliver applications to my client, not commercial objects. I’m only interested in reusability to the extent that it enables me (and my client’s IT shop) to deliver subsequent applications faster. And, as much as I like driving up my billable hours, solving problems neither I nor my client currently have isn’t part of my job description. It’s not part of yours, either.


Dive into this yourself: Try Telerik UI for Blazor free for 30 days.


Try Now

]]>
urn:uuid:facbedc7-1e8b-43a3-8ddb-b428f17d5f10 Generating Barcodes with KendoReact See how to create and customize barcodes for your React app with the KendoReact component library. 2025-02-04T19:17:56Z 2025-02-26T04:51:04Z Hassan Djirdeh See how to create and customize barcodes for your React app with the KendoReact component library.

Barcodes are one-dimensional visual representations of data easily scanned and decoded by barcode readers or mobile devices. They are widely used in retail, logistics and healthcare industries to encode product IDs, shipment information or other essential details in a machine-readable format. Whether scanning items at the grocery store checkout or spotting them on product packaging, barcodes are a familiar and integral part of everyday life.

In this article, we’ll explore how to create, customize and implement barcodes in React applications using the KendoReact library.

For a deep dive into how to implement QR codes (a type of 2D barcode) in your React applications, check out our previous article, Creating QR Codes with KendoReact.

The KendoReact Barcode Component

The KendoReact Barcode component, part of the KendoReact Barcodes library, simplifies the process of generating industry-standard barcodes. It supports various barcode symbologies (encoding schemes) and offers customization options for size, colors and text display.

The KendoReact <Barcode /> component is distributed through the @progress/kendo-react-barcodes npm package and can be imported directly:

import { Barcode } from "@progress/kendo-react-barcodes";

Here’s a basic example of how to use the React Barcode component:

import * as React from "react";
import { Barcode } from "@progress/kendo-react-barcodes";

const App = () => <Barcode value="123456789012" type="EAN13" />;

export default App;

This example generates a simple EAN-13 barcode encoding the value 123456789012. By default, the component automatically calculates checksum digits (where applicable) and adapts the barcode rendering to match the selected symbology.

Scanning this barcode with a compatible reader will return the encoded value.

Barcode Types

Barcodes come in various formats or symbologies, each designed for specific use cases in the industry. The KendoReact Barcode component supports a variety of 1D industry barcode types, including:

  • EAN-13: Used internationally for retail product identification.
  • UPC-A: Commonly used in North America for retail packaging.
  • Code 39: Encodes alphanumeric characters, often used in logistics and healthcare.
  • Code 128: A highly efficient symbology for encoding alphanumeric and special characters.
  • MSI: Frequently used for inventory management and warehouse applications.

To specify a barcode type in our component, we can use the type prop:

<Barcode value="CODE-39" type="Code39" />

When specifying a barcode type, we should check that the value conforms to that symbology’s rules to avoid rendering errors. For instance, EAN-13 requires exactly 12 digits, while Code 39 supports variable-length alphanumeric strings.
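
Here’s a minimal sketch of this kind of guard, assuming a hypothetical isValidEan13 helper (not part of KendoReact) that validates the value before rendering:

import * as React from "react";
import { Barcode } from "@progress/kendo-react-barcodes";

// Hypothetical helper: the value passed for an EAN-13 barcode should be
// exactly 12 digits (the checksum digit is calculated automatically).
const isValidEan13 = (value) => /^\d{12}$/.test(value);

const SafeEan13Barcode = ({ value }) =>
  isValidEan13(value) ? (
    <Barcode value={value} type="EAN13" />
  ) : (
    <p>Invalid EAN-13 value: expected exactly 12 digits.</p>
  );

export default SafeEan13Barcode;

A similar check (for example, a regular expression matching Code 39’s character set) can be used for other symbologies.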

Configuration Options

The KendoReact barcode component provides configuration options to customize its appearance and functionality. In the following sections, we’ll explore some of these options.

Size

We can control the barcode size by setting the width and height props. This ensures the barcode fits seamlessly into our application layout.

<Barcode value="123456789012" type="EAN13" width={300} height={100} />

The example above renders a barcode with a width of 300 pixels and a height of 100 pixels.

Color and Background

The color and background props allow us to set the barcode’s foreground and background colors, providing better visual contrast or matching our application’s design theme.

<Barcode
  value="123456789012"
  type="EAN13"
  color="#0055ff"
  background="#f5f5f5"
/>

This example creates a barcode with a blue foreground and a light gray background.

Border

Using the border prop, we can add a border to the barcode. This is useful for highlighting the barcode or integrating it into printed labels.

<Barcode
  value="123456789012"
  type="EAN13"
  border={{ width: 2, color: "#ff0000" }}
/>

This example adds a 2-pixel-wide red border around the barcode.

Padding

The padding prop allows us to add space around the barcode, enhancing its readability and scannability in certain cases.

<Barcode value="123456789012" type="EAN13" padding={50} />

This example renders the barcode with 50 pixels of padding on all sides.

Configuring the Text Label

The barcode component supports customizing the text label displayed below the barcode. We can adjust its appearance using the text prop, which accepts options such as font and color. We can also enable checksum display using the checksum prop for symbologies that support it.

import { Barcode } from "@progress/kendo-react-barcodes";

const textConfig = {
  color: "#0055ff",
  font: "20px Arial",
};

<Barcode value="123456789012" type="EAN13" text={textConfig} checksum={true} />;

This example styles the label in blue, sets its font to 20px Arial and displays the checksum digit alongside the encoded value.

A Practical Example—A Product Inventory System

One of the most practical applications of barcodes is in a product inventory system, where each product is assigned a unique barcode for identification, tracking and management. Using the KendoReact Barcode component, we can dynamically generate barcodes for products in our inventory, so each item is easily scannable and traceable.

In real-world applications, product data such as names, prices and SKUs would typically come from a server or API, allowing the system to handle large and dynamically updated inventories. Once this data is available on the client, here’s a simple example of rendering barcodes for each product in a React application.

import * as React from "react";
import { Barcode } from "@progress/kendo-react-barcodes";

const InventoryItem = ({ id, name, price, sku }) => {
  return (
    <div className="k-card">
      <div className="k-card-header">
        <h4>{name}</h4>
        <p>SKU: {sku}</p>
        <p>Price: ${price}</p>
      </div>
      <div className="k-card-body">
        <Barcode
          type="Code128"
          value={sku}
          width={200}
          height={80}
          text={{
            font: "12px Arial",
            color: "#333",
          }}
        />
      </div>
    </div>
  );
};

const App = () => {
  const products = [
    {
      id: "1",
      name: "Wireless Mouse",
      price: 29.99,
      sku: "WM-2023-001",
    },
    {
      id: "2",
      name: "Mechanical Keyboard",
      price: 99.99,
      sku: "KB-2023-002",
    },
  ];

  return (
    <div
      style={{ display: "grid", gridTemplateColumns: "1fr 1fr", gap: "20px" }}
    >
      {products.map((product) => (
        <InventoryItem key={product.id} {...product} />
      ))}
    </div>
  );
};

export default App;

The above example illustrates a basic inventory system where each product is represented as a card containing product details and a dynamically generated barcode. The barcode uses the Code 128 symbology, an encoding standard suitable for alphanumeric data like SKUs. The sku property of each product is passed to the Barcode component as the value to encode.
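
To tie this back to the note above about server data, here’s a minimal sketch of loading the product list before rendering. The /api/products endpoint is hypothetical and stands in for whatever API the inventory system exposes; the sketch reuses the InventoryItem component from the example above.

import * as React from "react";

const InventoryApp = () => {
  const [products, setProducts] = React.useState([]);

  React.useEffect(() => {
    // Hypothetical endpoint returning [{ id, name, price, sku }, ...]
    fetch("/api/products")
      .then((response) => response.json())
      .then((data) => setProducts(data))
      .catch((error) => console.error("Failed to load products", error));
  }, []);

  if (products.length === 0) {
    return <p>Loading inventory…</p>;
  }

  return (
    <div
      style={{ display: "grid", gridTemplateColumns: "1fr 1fr", gap: "20px" }}
    >
      {products.map((product) => (
        <InventoryItem key={product.id} {...product} />
      ))}
    </div>
  );
};

export default InventoryApp;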

Wrap-up

The KendoReact barcode component makes connecting physical and digital systems easy by adding barcode functionality to our React apps. Whether we’re managing product inventory, generating event tickets or handling shipping labels, the component simplifies the creation of barcodes and gives us plenty of ways to customize them.

Try it for yourself: KendoReact comes with a free 30-day trial. 

Try Now

]]>
urn:uuid:e37aad4f-3e34-48e4-9319-f0df64bdc3c0 What Is DeepSeek? Dive in Using DeepSeek, .NET Aspire and Blazor A new AI model has taken the tech world, and the actual world, by storm. See how to get started with DeepSeek, .NET Aspire and Blazor. 2025-02-04T15:19:47Z 2025-02-26T04:51:04Z Dave Brock A new AI model has taken the tech world, and the actual world, by storm.

It performs close to, or better than, the GPT-4o, Claude and Llama models. It was developed at a cost of $1.3 billion (rather than the originally reported $6 million)—using clever engineering instead of top-tier GPUs. Even better, it was shipped as open source, allowing anyone in the world to understand it, download it and modify it.

Have we achieved the democratization of AI, where the power of AI can be in the hands of many and not the few big tech companies who can afford billions of dollars in investment?

Of course, it’s not that simple. After Chinese startup DeepSeek released its latest model, it has disrupted stock markets, scared America’s Big Tech giants and incited TMZ-level drama across the tech space. To wit: Are American AI companies overvalued? Can competitive models truly be built at a fraction of the cost? Is this our Sputnik moment in the AI arms race? (I don’t think NASA was able to fork the Sputnik project on GitHub.)

In a future article, I’ll take a deeper dive into DeepSeek itself and its programming-focused model, DeepSeek Coder. For now, let’s get our feet wet with DeepSeek. Because DeepSeek is built on open source, we can download the models locally and work with them.

Recently, Progress’ own Ed Charbeneau led a live stream on running DeepSeek AI with .NET Aspire. In this post, I’ll take a similar approach and walk you through how to get DeepSeek AI working as he did in the stream.

Note: This post gets us started; make sure to watch Ed’s stream for a deeper dive.

Our Tech Stack

For our tech stack, we’ll be using .NET Aspire, an opinionated, cloud-ready stack for building distributed .NET applications. For our purposes today, we’ll use it to get up and running quickly and to easily manage our containers. I’m not doing justice here to all of .NET Aspire’s power and capabilities: check out the Microsoft documentation to learn more.

Before we get started, make sure you have the following:

  • Docker (to get up and running on Docker quickly, Docker Desktop is a great option)
  • Visual Studio 2022
  • .NET 8 or later
  • A basic knowledge of C#, ASP.NET Core and containers

Picking a Model

To run models locally, we’ll be using Ollama, an open-source tool for running large language models (LLMs) on our own machine. Head over to ollama.com and search for deepseek.

You might be tempted to install deepseek-v3, the new hotness, but it comes with a 404 GB download. Instead, we’ll be using the deepseek-r1 model. It’s less advanced but good enough for testing—it also uses far less space, so you don’t need to rent a data center to use it.

It’s a tradeoff between parameter size and download size. Pick the one that both you and your machine are comfortable with. In this demo, I’ll be using 8b, with a manageable 4.9GB download size. Take note of the flavor you are using, as we’ll need to put it in our Program.cs soon.

[Screenshot: deepseek-r1 model options on ollama.com]

Set Up the Aspire Project

Now, we can create a new Aspire project in Visual Studio.

  1. Launch Visual Studio 2022 and select the Create a new project option.

  2. Once the project templates display, search for aspire.

  3. Select the .NET Aspire Starter App template, and click Next.

    [Screenshot: selecting the .NET Aspire Starter App template]

  4. Then, click through the prompts to create a project. If you want to follow along, we are using .NET 9.0 and have named the project DeepSeekDemo.

  5. Right-click the DeepSeekDemo.AppHost project and click Manage NuGet Packages….

  6. Search for and install the following NuGet packages. (If you prefer, you can also do it from the .NET CLI or the project file.)

    • CommunityToolkit.Aspire.Hosting.Ollama
    • CommunityToolkit.Aspire.OllamaSharp

    [Screenshot: the CommunityToolkit.Aspire.Hosting.Ollama package in the NuGet Package Manager]

    We’ll be using the .NET Aspire Community Toolkit Ollama integration, which allows us to easily add Ollama models to our Aspire application.

  7. Now that everything is installed, you can navigate to the Program.cs file in that same project and replace its contents with the following.

    var builder = DistributedApplication.CreateBuilder(args);
    
    var ollama = builder.AddOllama("ollama")
                    .WithDataVolume()
                    .WithGPUSupport()
                    .WithOpenWebUI();
    builder.Build().Run();
    

    Here’s a breakdown of what these extension methods do:

    • AddOllama adds an Ollama container to the application builder. With that in place, we can add models to the container. These models download and run when the container starts.
    • WithDataVolume allows us to store the model in a Docker volume, so we don’t have to continually download it every time.
    • If you are lucky enough to have GPUs locally, the WithGPUSupport call uses those.
    • The WithOpenWebUI call allows us to talk to our chatbot through the Open WebUI project’s browser-based chat interface.
  8. Finally, let’s add a reference to our DeepSeek model so we can download and use it. We can also choose to host multiple models down the line.

    var builder = DistributedApplication.CreateBuilder(args);
    
    var ollama = builder.AddOllama("ollama")
                    .WithDataVolume()
                    .WithGPUSupport()
                    .WithOpenWebUI(); 
    
    var deepseek = ollama.AddModel("deepseek-r1:8b");
    
    builder.Build().Run();
    

Explore the Application

Let’s run the application! It’ll take a few minutes for all the containers to spin up. While you’re waiting, you can click over to the logs.

[Screenshot: the .NET Aspire dashboard logs]

Once all three containers have a state of Running, click into the endpoint for the ollama-openweb-ui container.

[Screenshot: the Open WebUI chat interface]

Once there, select the DeepSeek model and you’ll be ready to go.

[Screenshot: the DeepSeek model selected in Open WebUI]

Let’s try it out with a query. For me, I entered an oddly specific and purely hypothetical query—how can a tired parent persuade his daughter to expand her musical tastes beyond just Taylor Swift? (Just saying: the inevitable Kelce/Swift wedding will probably be financed by all my Spotify listens.)

[Screenshot: DeepSeek walking through its reasoning before answering]

You’ll notice right away something you don’t see with many other models: It’s walking you through its thought process before sending an answer. Look for this feature to be quickly “borrowed” by its competitors.

After a minute or two, I have an answer from DeepSeek.

[Screenshot: DeepSeek’s answer to the Taylor Swift query]

Next Steps

With DeepSeek set up in your local environment, the world is yours. Check out Ed’s DeepSeek AI with .NET Aspire demo to learn more about integrating it and any potential drawbacks.

Any thoughts on DeepSeek, AI or this article? Feel free to leave a comment. Happy coding!

]]>