Stop Redeploying: A Guide to Dynamic Feature Flags in Angular with AWS AppConfig

Because changing a boolean shouldn’t require a 30-minute CI/CD pipeline.

Dariant Virgie Siswadie

It's 4:45 PM on a Friday...

You are packing up for the weekend when a Product Manager messages you: "Hey, that new dashboard widget is crashing for a subset of beta users. Can we turn it off quickly?"

If your Angular application relies on the standard environment.prod.ts for feature toggling, your "quick" fix probably looks like this:

  1. Open environment.prod.ts.
  2. Change enableDashboardWidget from true to false.
  3. Commit and push.
  4. Wait 20 minutes for the CI/CD pipeline to build, test, and deploy.
  5. Invalidate the CloudFront cache.
  6. Pray you didn't accidentally break something else in the rush.

In modern software delivery, a 30-minute cycle to flip a boolean is an eternity.

Deployment (shipping code) and Release (exposing features) are two fundamentally different actions. They should not be tightly coupled.

In this post, we'll explore how to decouple them by moving from static environment files to a dynamic, serverless feature flag system using AWS AppConfig, a Lambda Function URL, and Angular Signals. No third-party feature flag vendor. No SDK installed in the frontend. The Angular app does not even know AWS exists.


Why Not LaunchDarkly / Unleash / Flagsmith or a simple DynamoDB Table?

Third-party feature flag services are genuinely good. If your organization already pays for one, use it. This post is for teams that either cannot introduce a new vendor dependency (compliance, procurement, budget) or already run on AWS and want to keep the blast radius small.

The "just use DynamoDB" approach is tempting too. Spin up a table, stick your flags in it, read them from an API Gateway endpoint. But building your own feature flag system means you also have to build your own feature flag safety nets. What happens when someone sets a timeout flag to -1? What happens when a bad config causes a spike in 5xx errors? You are now maintaining a custom control plane.

AWS AppConfig exists precisely for this use case. It is not a database you query. It is a configuration delivery system with deployment theory baked in. AWS AppConfig gives you several enterprise-grade guardrails out of the box:

  • Gradual Rollouts: Deploy a flag change to 10% of your fleet every minute, rather than a risky all-at-once flip.
  • Automatic Rollbacks: Tie a deployment to a CloudWatch Alarm. If your backend error rate spikes during a rollout, AppConfig automatically halts and reverts the configuration to its previous state — without any human intervention at 3 AM.
  • Validators: Use a JSON Schema or a Lambda function to ensure nobody accidentally sets a numeric timeout flag to a string, or a boolean to "yes".

These are not things that are particularly difficult to build. They are, however, things that are easy to build badly, and annoying to maintain. So let someone else maintain them.
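As a rough illustration of the kind of guardrail a validator provides, here is a hand-rolled TypeScript check of the sort AppConfig runs for you (via a JSON Schema or Lambda validator). The flag names and the "TimeoutMs" naming rule are hypothetical:

```typescript
// Hypothetical sketch of what a flag validator enforces. In practice you
// would attach a JSON Schema or Lambda validator to the Configuration
// Profile rather than hand-roll this.
type FlagConfig = Record<string, unknown>;

function validateFlags(config: FlagConfig): string[] {
  const errors: string[] = [];
  for (const [name, value] of Object.entries(config)) {
    if (name.endsWith('TimeoutMs')) {
      // Numeric timeouts must be positive numbers, never strings or -1.
      if (typeof value !== 'number' || value <= 0) {
        errors.push(`${name} must be a positive number, got ${JSON.stringify(value)}`);
      }
    } else if (typeof value !== 'boolean') {
      // Everything else must be a real boolean, not "yes".
      errors.push(`${name} must be a boolean, got ${JSON.stringify(value)}`);
    }
  }
  return errors;
}
```

A validator like this rejects the -1 timeout and the "yes" boolean from the scenarios above before they ever reach an environment.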


Setting Up AppConfig

Before writing any code, you need four resources in AWS. I will describe them here so the rest of the article makes sense. If you are using CDK, Terraform, or SAM, the resource names map directly.

1. Application. A logical container. We name ours after the project, e.g., QupayaApp.

2. Environment. Represents a deployment target: Development, Staging, Production. Each environment can have its own deployment strategy and CloudWatch alarm associations.

3. Configuration Profile. This is where the flags live. When creating the profile, choose the Feature Flags type rather than freeform. This gives you the structured JSON format and enables the flag management UI in the AppConfig console.

4. Deployment Strategy. We use AppConfig.Linear50PercentEvery30Seconds for non-critical flag changes and AppConfig.AllAtOnce for incident response. For production rollouts of user-facing features, we can define a custom strategy: 10% growth rate, 1-minute interval, 10-minute bake time. That gives us a meaningful window where only a fraction of requests see the new configuration, while CloudWatch alarms monitor for anomalies.

After creating these resources, you deploy the configuration to an environment. This is the step that actually makes the flags available for retrieval. An un-deployed configuration profile is just a draft.


The Serverless Proxy Pattern

A common concern with integrating AWS services on the frontend is the complexity of the SDK and the IAM story. Developers dread setting up AWS Cognito Identity Pools, managing credentials in the browser, and figuring out which IAM policies to attach to an unauthenticated user pool identity.

We avoid this entirely with the Serverless Proxy Pattern.

Angular to Lambda to AppConfig architecture diagram
The Serverless Proxy Pattern: Angular just hits a plain HTTP endpoint and stays blissfully unaware of AWS. (AI Generated)

Instead of Angular talking directly to AppConfig, a lightweight AWS Lambda function sits in the middle:

  1. Angular makes a standard GET request to a public Lambda Function URL.
  2. Lambda retrieves the current configuration from AppConfig.
  3. Angular initializes with the latest flags.

Your frontend is completely agnostic to AWS. It consumes a plain JSON endpoint, which you could swap out for anything else without touching a line of Angular code.


The Elephant in the Room: Is a Public Endpoint a Security Risk?

A common, immediate reaction to this architecture is: "Wait, we have a public endpoint exposing our feature flag configuration? Can anyone just GET this endpoint and see our flags? Isn't that a security risk?"

It is a valid instinct, but the short answer is no, provided you treat feature flags as routing configuration rather than secrets. Here is the candid reality of frontend architecture and why this pattern is safe for enterprise applications:

1. The Browser is an Untrusted Environment

If a feature flag dictates whether an Angular component renders, the browser must know the value of that flag. Even if you secured the endpoint with complex authentication, a user could simply open Chrome DevTools, check the Network tab, and read the JSON response. Anything sent to the frontend is inherently public. Hiding the endpoint behind auth does not hide the data from a determined user.

2. Feature Flags are NOT Secrets

A feature flag is a boolean or a configuration string (e.g., {"betaCheckout": true}). You should never put sensitive data—like database passwords, internal IP addresses, API keys, or personally identifiable information (PII)—inside a feature flag configuration.

3. Real Security Lives on the Backend (Defense in Depth)

This is the most critical concept: a feature flag hides the UI; it does not secure the system. If you have a feature flag called enableDeleteUserButton, a malicious user might intercept the network request, mock the response to true, and make the "Delete User" button appear on their screen.

However, when they click that button, your backend API must act as the actual bouncer. A robust backend implements its own feature flag guards (e.g., a custom decorator on the endpoint that checks if the feature is enabled globally and if the user has authorization). If the backend knows the feature is off, or the user lacks permissions, it simply returns a 403 Forbidden. The frontend flag is purely for User Experience (UX); your backend is the actual vault.
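A minimal sketch of such a backend guard, written framework-free for clarity. The flag store shape, role check, and function names here are assumptions for illustration; in a real API this logic would live in middleware or a decorator on the endpoint:

```typescript
// Hypothetical backend-side guard: the flag AND the user's authorization
// must both pass before the action runs. The frontend flag is UX only.
interface User {
  id: string;
  roles: string[];
}

type FlagStore = Record<string, boolean>;

function guardFeature(
  flags: FlagStore,
  flagName: string,
  user: User,
  requiredRole: string,
): { status: 200 | 403; body: string } {
  if (!flags[flagName]) {
    // Feature is off globally: mocking the frontend flag gains nothing.
    return { status: 403, body: 'Feature disabled' };
  }
  if (!user.roles.includes(requiredRole)) {
    // Feature is on, but this user is not authorized to use it.
    return { status: 403, body: 'Forbidden' };
  }
  return { status: 200, body: 'OK' };
}
```

Even if a user flips the browser-side flag and makes the "Delete User" button render, the request still dies here with a 403.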

4. Protecting Your AWS Bill

While the data isn't sensitive, you still don't want arbitrary websites pinging your Lambda function and driving up costs. You lock this down using a strict CORS policy—hardcoding "Access-Control-Allow-Origin" to your actual production domains rather than *—and potentially placing the Lambda URL behind an AWS WAF (Web Application Firewall) to rate-limit requests.
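If you serve several environments from one function, the hardcoded origin generalizes to an allowlist. A sketch of that idea; the domain names and function name are placeholders, not part of the article's handler:

```typescript
// Hypothetical origin allowlist: echo the request's Origin header back
// only when it matches a known domain; otherwise fall back to the
// primary domain, so arbitrary sites never get a permissive CORS answer.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://staging.example.com',
]);

function resolveCorsOrigin(requestOrigin: string | undefined): string {
  if (requestOrigin !== undefined && ALLOWED_ORIGINS.has(requestOrigin)) {
    return requestOrigin;
  }
  return 'https://app.example.com';
}
```

The resolved value goes into the Access-Control-Allow-Origin header in place of the hardcoded string.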


The Secret Sauce: AWS AppConfig Lambda Extension

AWS AppConfig managed Lambda Extension
The Secret Sauce. Using AWS Managed AppConfig Lambda Extension (AI Generated)

Before we go further, some of you might have already raised an obvious objection: "Doesn't that Lambda introduce cold starts and latency on every single page load?"

Let's address the reality of AWS Lambda Cold vs. Hot Starts.

If we used the standard AWS SDK inside Lambda to query AppConfig over the network on every invocation, yes, we would have a massive latency problem. But we do not do that. We use the AWS AppConfig Lambda Extension.

It is an AWS-managed Lambda Layer that you attach to your function. When a Lambda container spins up for the very first time (a cold start), the extension must make an initial network hop to the AppConfig service to fetch the configuration. This adds a minor latency bump (usually a few hundred milliseconds) to that specific request.

However, in a production application with regular traffic, the vast majority of your Lambda invocations will be hot. The execution environment stays alive, and the extension polls the AppConfig service in the background at a configurable interval (the AWS_APPCONFIG_EXTENSION_POLL_INTERVAL_SECONDS environment variable, defaulting to 45 seconds). It stores the latest configuration directly in local memory.

For all these hot starts, your handler never calls the external AppConfig API. It makes an HTTP GET to http://localhost:2772/applications/{app}/environments/{env}/configurations/{profile} — an endpoint served by the extension process running inside the same sandbox. The response comes back in single-digit milliseconds from local memory. No network hop. No API call billed.

(And if your application has zero tolerance for even a single 500ms cold start, you can easily enable AWS Provisioned Concurrency to keep a pool of containers permanently hot).

The extension also handles configuration versioning internally. It only fetches a new configuration from the AppConfig service if the version has changed since the last poll. This means the vast majority of background polls result in a 304-equivalent no-op.

One thing that tripped us up initially: the extension's poll interval is not the same as how fresh the data is for your users. The poll interval controls how often the extension checks AppConfig. But your Lambda might serve thousands of requests between polls, all using the same cached value. Combined with the browser-side Cache-Control header we will set, the actual propagation delay from "toggle a flag in the console" to "user sees the change" is roughly extension poll interval + browser cache max-age. With both set to 60 seconds, worst case is about 2 minutes. For feature flags, that is perfectly acceptable.
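That worst-case arithmetic is simple enough to encode directly. The 60/60 values are the ones from our setup; everything else is just the sum described above:

```typescript
// Worst-case flag propagation: the extension may have polled just before
// the change, and the browser may have cached the response just before
// the extension picked it up, so the delays add.
function worstCasePropagationSeconds(
  extensionPollIntervalSeconds: number,
  browserCacheMaxAgeSeconds: number,
): number {
  return extensionPollIntervalSeconds + browserCacheMaxAgeSeconds;
}

// With both set to 60 seconds, a toggled flag reaches every user
// within about 2 minutes.
```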


Building It

The Lambda Handler

The handler's job is simple: read from the local extension cache, normalize the AppConfig Feature Flag format into a flat object the frontend can consume directly, and return it with appropriate cache headers.

The normalization step is important. The raw AppConfig Feature Flags format nests flag values under a values key with enabled properties. Our Angular app should not need to know about AppConfig's internal schema. The Lambda acts as an anti-corruption layer, translating between the provider's format and our application's contract.
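Concretely, the translation looks like this in isolation, with sample values. The nested shape mirrors what the handler below expects from the extension; the flag names are made up:

```typescript
// Sample of the nested shape the handler reads from the extension,
// and the flat map we hand to the frontend. Flag names are illustrative.
interface RawFeatureFlags {
  values?: Record<string, { enabled?: boolean }>;
}

function flattenFlags(raw: RawFeatureFlags): Record<string, boolean> {
  const flags: Record<string, boolean> = {};
  for (const [key, value] of Object.entries(raw.values ?? {})) {
    flags[key] = value.enabled ?? false;
  }
  return flags;
}

const sample: RawFeatureFlags = {
  values: {
    newDashboardWidget: { enabled: true },
    maintenanceMode: { enabled: false },
  },
};
// flattenFlags(sample) → { newDashboardWidget: true, maintenanceMode: false }
```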

// index.mjs
export const handler = async (event) => {
  const appConfigUrl =
    'http://localhost:2772/applications/QupayaApp/environments/Production/configurations/FeatureFlags';

  try {
    const response = await fetch(appConfigUrl);

    if (!response.ok) {
      throw new Error(`AppConfig extension returned ${response.status}: ${response.statusText}`);
    }

    const rawConfig = await response.json();

    // AppConfig Feature Flags have a specific structure.
    // We flatten it to a simple { flagName: boolean } map
    // so the frontend stays decoupled from AppConfig's schema.
    const flags = {};
    if (rawConfig.values) {
      for (const [key, value] of Object.entries(rawConfig.values)) {
        flags[key] = value.enabled ?? false;
      }
    }

    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'application/json',
        'Cache-Control': 'public, max-age=60',
        'Access-Control-Allow-Origin': 'https://yourdomain.com', // Lock this down to your actual domain in production
        'Access-Control-Allow-Methods': 'GET',
        'Access-Control-Allow-Headers': 'Content-Type',
      },
      body: JSON.stringify(flags),
    };
  } catch (error) {
    console.error('Failed to fetch flags from AppConfig extension:', error);

    // Return a safe default rather than a 500.
    // An empty object means "all flags off" — the safest state.
    // The frontend is built to handle this gracefully.
    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'application/json',
        'Cache-Control': 'no-cache',
        'Access-Control-Allow-Origin': 'https://yourdomain.com', // Lock this down to your actual domain in production
      },
      body: JSON.stringify({}),
    };
  }
};

A few decisions worth explaining:

Why return 200 on error? The Angular app fetches these flags during bootstrap. If we return a 500, the frontend has two options: crash (terrible UX) or catch the error and use defaults (which is what we want). By returning 200 with an empty flags object, we push the "use defaults" behavior to the Lambda layer. The frontend always gets a valid JSON response. It does not need error handling for this specific endpoint at all — though we still add it as defense in depth.

Why no-cache on the error response? If the extension is temporarily unhealthy, we do not want browsers caching the empty fallback for 60 seconds. The next request should try again.

Why lock down Access-Control-Allow-Origin? Feature flags are not sensitive data; they only control which UI components render. Restricting the origin is less about secrecy than about your bill: a wildcard would let any website call your Lambda from its visitors' browsers. If your flags do contain genuinely sensitive information (pricing tiers, internal tooling URLs), go further and add authentication to the Function URL.

The Angular Service

This is where the article starts to get Angular-specific. We need a service that fetches the flags, stores them reactively, and exposes a clean API for components to consume.

We use Angular Signals for the state container. Signals are synchronous, work without Zone.js, and integrate naturally with Angular's template rendering. A computed signal gives us type-safe access to individual flags without components needing to understand the shape of the full flags object.

// src/app/core/feature-flag.service.ts
import { Injectable, inject, signal, computed } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { catchError, map, of } from 'rxjs';
import type { Observable } from 'rxjs';
import { FEATURE_FLAG_API_URL } from './feature-flag.tokens';

export interface AppFlags {
  newDashboardWidget: boolean;
  maintenanceMode: boolean;
  [key: string]: boolean;
}

const DEFAULT_FLAGS: AppFlags = {
  newDashboardWidget: false,
  maintenanceMode: false,
};

@Injectable({ providedIn: 'root' })
export class FeatureFlagService {
  private readonly http = inject(HttpClient);
  private readonly apiUrl = inject(FEATURE_FLAG_API_URL);

  /** The reactive state. Components read from this signal. */
  readonly flags = signal<AppFlags>(DEFAULT_FLAGS);

  /** Convenience accessor for individual flags. */
  isEnabled(flagName: keyof AppFlags): boolean {
    return this.flags()[flagName] ?? false;
  }

  /** Returns a computed signal for a specific flag — reactive in templates. */
  flag(flagName: keyof AppFlags) {
    return computed(() => this.flags()[flagName] ?? false);
  }

  /**
   * Fetches flags from the Lambda endpoint.
   * Returns an Observable<boolean> that emits `true` when done.
   * The app initializer waits for this to complete before rendering.
   */
  loadFlags(): Observable<boolean> {
    return this.http.get<Record<string, boolean>>(this.apiUrl).pipe(
      map((data) => {
        // Merge with defaults so new flags added to DEFAULT_FLAGS
        // still have a value even if the server hasn't deployed them yet.
        this.flags.set({ ...DEFAULT_FLAGS, ...data });
        return true;
      }),
      catchError((err) => {
        console.warn(
          '[FeatureFlagService] Failed to load flags. Using defaults.',
          err
        );
        // Do NOT re-throw. The app must boot with safe defaults.
        return of(true);
      })
    );
  }
}

The injection token for the API URL keeps the service testable and environment-agnostic:

// src/app/core/feature-flag.tokens.ts
import { InjectionToken } from '@angular/core';

export const FEATURE_FLAG_API_URL = new InjectionToken<string>(
  'FEATURE_FLAG_API_URL'
);

A few things to notice:

DEFAULT_FLAGS as a separate constant. This is your contract. Every flag your application references must exist here with a safe default value. When someone adds a new flag to AppConfig but has not deployed it yet, or when the fetch fails entirely, the app still functions. The defaults act as a schema declaration and a safety net simultaneously.

map instead of tap. The original version used tap and returned the AppFlags observable directly, which meant the catchError returned of(true) — a different type than what tap passes through. Using map to explicitly return true keeps the return type consistent: Observable<boolean>. The app initializer does not care about the flags themselves; it just needs to know when loading is done.

Merging with spread. { ...DEFAULT_FLAGS, ...data } ensures that if the server returns a subset of flags (maybe a new flag was just added to DEFAULT_FLAGS but has not been configured in AppConfig yet), the missing flags fall back to their defaults rather than being undefined.

The flag() method returns a computed signal. This is important for template reactivity. If flags are ever refreshed at runtime (for example via periodic polling), components using flag('newDashboardWidget') will automatically re-render. Components using isEnabled() in imperative code get a one-time boolean snapshot, which is appropriate for guard logic or ngOnInit branching.

Blocking the Bootstrap

We need the flags loaded before Angular renders the first component. If we do not block, there is a window where components render with DEFAULT_FLAGS, then re-render when the real flags arrive. Users see a widget appear and immediately disappear — or worse, a layout shift that triggers a CLS penalty.

In Angular 19+ standalone applications, provideAppInitializer is the idiomatic way to block bootstrap. It accepts a factory function that can return a Promise or Observable. Angular waits for it to resolve before rendering.

// src/app/app.config.ts
import {
  ApplicationConfig,
  provideAppInitializer,
  inject,
} from '@angular/core';
import {
  provideHttpClient,
  withFetch,
  withInterceptorsFromDi,
} from '@angular/common/http';
import { FeatureFlagService } from './core/feature-flag.service';
import { FEATURE_FLAG_API_URL } from './core/feature-flag.tokens';
import { firstValueFrom } from 'rxjs';
import { environment } from '../environments/environment';

export const appConfig: ApplicationConfig = {
  providers: [
    provideHttpClient(withFetch(), withInterceptorsFromDi()),
    {
      provide: FEATURE_FLAG_API_URL,
      useValue: environment.featureFlagApiUrl,
    },
    provideAppInitializer(() => {
      const featureFlagService = inject(FeatureFlagService);
      return firstValueFrom(featureFlagService.loadFlags());
    }),
  ],
};

withFetch() tells Angular's HttpClient to use the native fetch API under the hood instead of XMLHttpRequest. This matters for SSR compatibility (Node.js 18+ has native fetch) and is the recommended default for new Angular projects.

firstValueFrom converts our Observable<boolean> into a Promise<boolean>. The observable completes after emitting one value (either from a successful response or the catchError fallback), so firstValueFrom resolves immediately. If the HTTP request hangs indefinitely, this will also hang indefinitely — which is a real problem. We address this in the gotchas section.

The environment file still exists, but now it holds infrastructure URLs, not feature state:

// src/environments/environment.ts
export const environment = {
  production: false,
  featureFlagApiUrl: 'http://localhost:3001/flags', // local mock for development
};

// src/environments/environment.prod.ts
export const environment = {
  production: true,
  featureFlagApiUrl: 'https://abc123.lambda-url.eu-central-1.on.aws/',
};

This is an important distinction. The environment file still controls where to fetch flags from (which varies per environment), but no longer controls what the flags are (which varies per release decision).

Using the Flags in Components

Because we blocked bootstrap, the flags signal is guaranteed to be populated by the time any component initializes. The component code is the boring part — which is exactly what you want.

// src/app/dashboard/dashboard.component.ts
import { Component, inject } from '@angular/core';
import { FeatureFlagService } from '../core/feature-flag.service';
import { NewWidgetComponent } from './new-widget.component';
import { OldWidgetComponent } from './old-widget.component';
import { MaintenanceBannerComponent } from '../shared/maintenance-banner.component';

@Component({
  selector: 'app-dashboard',
  standalone: true,
  imports: [NewWidgetComponent, OldWidgetComponent, MaintenanceBannerComponent],
  template: `
    @if (maintenanceMode()) {
      <app-maintenance-banner />
    }

    <h1>Dashboard</h1>

    @if (newDashboardWidget()) {
      <app-new-widget />
    } @else {
      <app-old-widget />
    }`,
})
export class DashboardComponent {
  private readonly featureFlagService = inject(FeatureFlagService);

  protected readonly maintenanceMode =
    this.featureFlagService.flag('maintenanceMode');
  protected readonly newDashboardWidget =
    this.featureFlagService.flag('newDashboardWidget');
}

Each flag is a computed signal. The template reads it with (). No pipes. No async. No subscriptions to manage. If you later add runtime polling, the template re-evaluates automatically when a flag changes.

For cases where you need to guard a route rather than toggle a template fragment:

// src/app/core/feature-flag.guard.ts
import { inject } from '@angular/core';
import { CanActivateFn, Router } from '@angular/router';
import { FeatureFlagService, AppFlags } from './feature-flag.service';

export function featureFlagGuard(flagName: keyof AppFlags): CanActivateFn {
  return () => {
    const featureFlagService = inject(FeatureFlagService);
    const router = inject(Router);

    if (featureFlagService.isEnabled(flagName)) {
      return true;
    }

    return router.createUrlTree(['/']);
  };
}

Usage in routes:

{
  path: 'beta-search',
  loadComponent: () => import('./beta-search/beta-search.component'),
  canActivate: [featureFlagGuard('betaSearch')],
}
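Runtime polling came up earlier as an optional extension: the service loads flags once at bootstrap, but nothing stops you from refreshing them while the app is open. Here is a framework-agnostic sketch of the mechanics; the FlagPoller class is hypothetical, and in the Angular service the onUpdate callback would call this.flags.set(...) so the signal-based templates re-render:

```typescript
// Hypothetical sketch: a tiny framework-agnostic poller that refreshes
// a flag map at a fixed interval, merging each response over defaults.
type FlagMap = Record<string, boolean>;

class FlagPoller {
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private readonly fetchFlags: () => Promise<FlagMap>, // e.g. GET to the Lambda URL
    private readonly defaults: FlagMap,
    private readonly onUpdate: (flags: FlagMap) => void,
  ) {}

  /** Fetch once; on failure keep whatever we had (defaults at worst). */
  async poll(): Promise<void> {
    try {
      const data = await this.fetchFlags();
      this.onUpdate({ ...this.defaults, ...data });
    } catch {
      // Swallow errors: stale flags beat a crashed poller.
    }
  }

  start(intervalMs: number): void {
    this.stop();
    this.timer = setInterval(() => void this.poll(), intervalMs);
  }

  stop(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
  }
}
```

Keep the interval generous (a minute or more); the server-side cache headers already bound how fresh the data can be.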

The Gotchas We Hit in Production

Gotchas in Production
Some gotchas we might find in production (AI Generated)

1. The Bootstrap Timeout Problem

provideAppInitializer blocks rendering until the returned promise resolves. If the Lambda Function URL is unreachable — DNS failure, misconfigured security group, AWS region outage — the promise never resolves. Your users see a blank white screen. Indefinitely.

The catchError in loadFlags handles HTTP errors (4xx, 5xx), but it does not handle the case where the request hangs without responding. The HttpClient does not have a default timeout.

Fix: add a timeout operator.

import { timeout, catchError, map, of } from 'rxjs';

loadFlags(): Observable<boolean> {
  return this.http.get<Record<string, boolean>>(this.apiUrl).pipe(
    timeout(5000), // 5 seconds. If the flags aren't here by then, move on.
    map((data) => {
      this.flags.set({ ...DEFAULT_FLAGS, ...data });
      return true;
    }),
    catchError((err) => {
      console.warn(
        '[FeatureFlagService] Failed to load flags. Using defaults.',
        err
      );
      return of(true);
    })
  );
}

The timeout operator throws a TimeoutError after 5 seconds, which catchError catches. The app boots with defaults. Five seconds is generous — the Lambda typically responds in under 200ms — but it accounts for cold starts and transient network hiccups without making users wait unreasonably.

2. SSR and the Disappearing Fetch

If your Angular app uses SSR (server-side rendering), the app initializer runs on the server too. The server makes the HTTP request to your Lambda endpoint. This works, but introduces two problems:

The server might not be able to reach the Lambda URL. If your Lambda Function URL has IP restrictions, or your SSR server runs in a different VPC, or there is no outbound internet access, the request fails silently and you boot with defaults — on the server. Then the client hydrates with its own fetch (which may succeed), causing a hydration mismatch and a full DOM re-render. The NG0500 hydration error stares at you from the console.

Double fetching. Even when both succeed, you are making the same request twice: once from the server, once from the client during hydration. The server already has the flags baked into the rendered HTML. The client should reuse them.

The cleanest fix for both is to use Angular's TransferState mechanism. The server fetches the flags, stores them in the transfer state, and the client reads them from the serialized HTML instead of making a second request:

import { Injectable, inject, signal, computed, PLATFORM_ID } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';
import { HttpClient } from '@angular/common/http';
import { makeStateKey, TransferState } from '@angular/core';
import { timeout, catchError, map, of, tap } from 'rxjs';
import type { Observable } from 'rxjs';
import { FEATURE_FLAG_API_URL } from './feature-flag.tokens';

export interface AppFlags {
  newDashboardWidget: boolean;
  maintenanceMode: boolean;
  [key: string]: boolean;
}

const DEFAULT_FLAGS: AppFlags = {
  newDashboardWidget: false,
  maintenanceMode: false,
};

const FLAGS_STATE_KEY = makeStateKey<AppFlags | null>('featureFlags');

@Injectable({ providedIn: 'root' })
export class FeatureFlagService {
  private readonly http = inject(HttpClient);
  private readonly apiUrl = inject(FEATURE_FLAG_API_URL);
  private readonly transferState = inject(TransferState);
  private readonly platformId = inject(PLATFORM_ID);

  readonly flags = signal<AppFlags>(DEFAULT_FLAGS);

  isEnabled(flagName: keyof AppFlags): boolean {
    return this.flags()[flagName] ?? false;
  }

  flag(flagName: keyof AppFlags) {
    return computed(() => this.flags()[flagName] ?? false);
  }

  loadFlags(): Observable<boolean> {
    // On the browser, check if the server already fetched the flags
    if (isPlatformBrowser(this.platformId)) {
      const hasTransferredFlags = this.transferState.hasKey(FLAGS_STATE_KEY);
      if (hasTransferredFlags) {
        const transferred = this.transferState.get(FLAGS_STATE_KEY, DEFAULT_FLAGS);
        this.flags.set({ ...DEFAULT_FLAGS, ...transferred });
        this.transferState.remove(FLAGS_STATE_KEY);
        return of(true);
      }
    }

    return this.http.get<Record<string, boolean>>(this.apiUrl).pipe(
      timeout(5000),
      map((data) => {
        const merged = { ...DEFAULT_FLAGS, ...data };
        this.flags.set(merged);
        // On the server, store flags for the client to pick up
        if (!isPlatformBrowser(this.platformId)) {
          this.transferState.set(FLAGS_STATE_KEY, merged);
        }
        return true;
      }),
      catchError((err) => {
        console.warn(
          '[FeatureFlagService] Failed to load flags. Using defaults.',
          err
        );
        return of(true);
      })
    );
  }
}

With this, the server fetches once, serializes the flags into the HTML, and the client picks them up without a second HTTP request. No hydration mismatch. No double fetch.

Testing

Angular Testing for AWS AppConfig Feature Flags
Simple Regular Angular Testing for Feature Flags (AI Generated)

Testing the service is straightforward because we used an injection token for the URL. No need to mock AWS. No need to run a Lambda locally. Your test provides a URL, the testing backend from provideHttpClientTesting intercepts requests to it, and you assert on the signal value.

// src/app/core/feature-flag.service.spec.ts
import { TestBed } from '@angular/core/testing';
import {
  HttpTestingController,
  provideHttpClientTesting,
} from '@angular/common/http/testing';
import { provideHttpClient } from '@angular/common/http';
import { FeatureFlagService, AppFlags } from './feature-flag.service';
import { FEATURE_FLAG_API_URL } from './feature-flag.tokens';
import { firstValueFrom } from 'rxjs';

describe('FeatureFlagService', () => {
  let service: FeatureFlagService;
  let httpTesting: HttpTestingController;

  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [
        provideHttpClient(),
        provideHttpClientTesting(),
        { provide: FEATURE_FLAG_API_URL, useValue: '/mock-flags' },
      ],
    });

    service = TestBed.inject(FeatureFlagService);
    httpTesting = TestBed.inject(HttpTestingController);
  });

  afterEach(() => httpTesting.verify());

  it('should populate flags from the API response', async () => {
    const loadPromise = firstValueFrom(service.loadFlags());

    const req = httpTesting.expectOne('/mock-flags');
    req.flush({ newDashboardWidget: true, maintenanceMode: false });

    await loadPromise;

    expect(service.flags().newDashboardWidget).toBe(true);
    expect(service.flags().maintenanceMode).toBe(false);
  });

  it('should use defaults when the API fails', async () => {
    const loadPromise = firstValueFrom(service.loadFlags());

    const req = httpTesting.expectOne('/mock-flags');
    req.error(new ProgressEvent('Network error'));

    await loadPromise;

    expect(service.flags().newDashboardWidget).toBe(false);
    expect(service.flags().maintenanceMode).toBe(false);
  });

  it('should merge partial responses with defaults', async () => {
    const loadPromise = firstValueFrom(service.loadFlags());

    const req = httpTesting.expectOne('/mock-flags');
    // Server only knows about one flag
    req.flush({ maintenanceMode: true });

    await loadPromise;

    expect(service.flags().maintenanceMode).toBe(true);
    // newDashboardWidget was not in the response, falls back to default
    expect(service.flags().newDashboardWidget).toBe(false);
  });

  it('should return true from isEnabled for an active flag', async () => {
    const loadPromise = firstValueFrom(service.loadFlags());

    const req = httpTesting.expectOne('/mock-flags');
    req.flush({ newDashboardWidget: true, maintenanceMode: false });

    await loadPromise;

    expect(service.isEnabled('newDashboardWidget')).toBe(true);
    expect(service.isEnabled('maintenanceMode')).toBe(false);
  });
});

For component tests, provide a lightweight stub for the FeatureFlagService and skip the HTTP layer entirely. We cannot use new FeatureFlagService() directly because the real service uses inject() field initializers, which only work inside an Angular injection context. Instead, we create a plain object that satisfies the same public API:

// src/app/dashboard/dashboard.component.spec.ts (excerpt)
import { signal, computed } from '@angular/core';
import { TestBed } from '@angular/core/testing';
import { of } from 'rxjs';
import { FeatureFlagService, AppFlags } from '../core/feature-flag.service';
import { DashboardComponent } from './dashboard.component';

const flagsSignal = signal<AppFlags>({
  newDashboardWidget: true,
  maintenanceMode: false,
});

TestBed.configureTestingModule({
  imports: [DashboardComponent],
  providers: [
    {
      provide: FeatureFlagService,
      useValue: {
        flags: flagsSignal,
        isEnabled: (name: keyof AppFlags) => flagsSignal()[name] ?? false,
        flag: (name: keyof AppFlags) => computed(() => flagsSignal()[name] ?? false),
        loadFlags: () => of(true),
      },
    },
  ],
});

The Cost

This is the section where people expect to find a catch. There isn't one, at this scale.

Lambda Function URLs have no charge of their own; you pay standard Lambda pricing: $0.20 per million requests plus compute time. A handler that reads from localhost and returns JSON executes in under 10ms, so at 128MB the per-invocation cost is dominated by the request charge of $0.0000002. One million requests cost about $0.20, and the first million per month are free.

AppConfig bills per API call (a fraction of a cent per thousand calls) plus $0.0008 per configuration actually received, and a poll only counts as a received configuration when the version has changed. The Lambda Extension caches locally: with a 60-second poll interval and one container, that is 43,200 polls per month, roughly a cent in API calls, plus $0.0008 for each flag deployment you actually ship. Even with 10 concurrent containers across scaling events, it stays well under a dollar a month.

Browser caching eliminates repeat requests from the same user session. With Cache-Control: max-age=60, a user navigating between pages does not trigger new Lambda invocations. The flag response is served from the browser's HTTP cache.

For a mid-sized enterprise app with 100,000 monthly active users, realistic total cost: under $5.00/month. For most teams, the engineering time saved on a single avoided emergency deployment pays for years of this infrastructure.
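The Lambda side of that estimate is easy to sanity-check. A back-of-envelope calculation, treating the publicly listed rates at the time of writing ($0.20 per million requests, $0.0000166667 per GB-second) as assumptions:

```typescript
// Back-of-envelope Lambda cost for the flag endpoint, before free tier.
// Rates are assumptions; check the current AWS pricing page.
const REQUEST_PRICE = 0.20 / 1_000_000; // $ per request
const GB_SECOND_PRICE = 0.0000166667;   // $ per GB-second

function monthlyLambdaCost(
  requestsPerMonth: number,
  durationMs: number,
  memoryMb: number,
): number {
  const gbSeconds = requestsPerMonth * (durationMs / 1000) * (memoryMb / 1024);
  return requestsPerMonth * REQUEST_PRICE + gbSeconds * GB_SECOND_PRICE;
}

// One million 10ms invocations at 128MB come to about $0.22/month,
// before the free tier wipes most of that out.
```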

You can check the official AWS pricing docs for Lambda and AppConfig for the most up-to-date information.


What We Actually Built

A system where:

  • Feature state lives outside the build artifact. Changing a flag requires zero code changes and zero deployments.
  • The Angular app has no AWS dependency. It fetches JSON from a URL. Swap AppConfig for LaunchDarkly tomorrow by changing one Lambda handler and one environment URL. The Angular code does not change.
  • Rollouts are gradual by default. AppConfig deployment strategies ensure that a misconfigured flag does not hit all users at once.
  • Rollbacks are automatic. A CloudWatch alarm tied to your backend error rate will revert a bad configuration without human intervention.
  • The frontend is resilient at every layer. Lambda returns safe defaults if the extension fails. Angular uses safe defaults if the Lambda fails. The browser cache serves stale data if the network fails. Every failure mode degrades gracefully rather than catastrophically.
  • Flags are reactive. Signal-based state means components re-render when flags change, whether from initial load or runtime polling. No ceremony.

The next time someone messages you at 4:45 PM on a Friday asking you to turn off a feature, you open the AppConfig console, toggle the flag, choose the AllAtOnce deployment strategy, and close your laptop. The change propagates in under 2 minutes. No build. No deploy. No prayer needed.