
await Inside Array.map(): The Silent Performance Killer in AI Code

AI tools put await inside .map() callbacks and chain awaits sequentially, serializing work that could run in parallel. Learn the fix with Promise.all().

PERF-001 PERF-005

Quick Answer

Using await inside Array.map() does not do what it looks like it does. Each async callback returns a Promise immediately, so awaiting the mapped array hands you an array of unresolved Promises - and writing the awaits sequentially (inside each callback, or in a for...of loop) serializes operations that could run simultaneously. The fix is to wrap the mapped Promises in Promise.all(), which fires all operations at once and waits for every one of them to finish. Depending on how many independent operations were serialized, this single change can reduce response times by 5-20x on typical API handlers.

How await Inside .map() Actually Executes

The mistake comes from a subtle misunderstanding of JavaScript's async model. When you write array.map(async item => await doSomething(item)), each callback does start an async operation - but the callback returns a Promise immediately, and .map() moves straight on to the next item. Without Promise.all(), nothing is actually waiting on those Promises. The await inside the callback only makes that individual callback wait for its own operation before finishing - .map() itself never waits for any of them.

The result depends on exactly how you write it. There are two common patterns, both wrong in different ways:

Pattern 1 - Silent bug, no parallelism, partial results: You await array.map() directly. This awaits a plain array of Promises, not a single Promise - so the await resolves immediately with the array of still-pending Promises. You process unresolved data.

Pattern 2 - Sequential execution: You use for...of with await, or chain multiple awaits inside each callback, so operations that could overlap run one after another. This is what Cursor and Bolt most commonly generate.
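Pattern 1 is easy to reproduce in isolation. Here is a minimal sketch, with loadUser as a hypothetical stand-in for a real network call:

```typescript
// loadUser is a hypothetical stand-in for a real fetch
async function loadUser(id: number): Promise<string> {
  return `user-${id}`;
}

// ❌ Pattern 1: await on a plain array resolves immediately,
// handing back an array of still-pending Promises
async function broken(ids: number[]) {
  const results = await ids.map((id) => loadUser(id));
  return results; // Promise<string>[], not string[]
}

// ✅ Promise.all turns the array of Promises into one awaitable Promise
async function fixed(ids: number[]): Promise<string[]> {
  return Promise.all(ids.map((id) => loadUser(id)));
}
```

Note that TypeScript usually flags the broken version when the return type is annotated as string[], which is one reason this bug survives mostly in untyped or any-heavy codebases.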

According to SonarSource's JavaScript performance analysis, sequential async iteration is present in over 28% of AI-generated API handlers, making it one of the most common performance anti-patterns in vibe-coded applications.

The Exact Code AI Generates

Here is what tools like Cursor, Bolt, and Lovable produce when asked to fetch user data for a list of IDs:

// ❌ BAD - Sequential awaits inside each callback (PERF-001)
// The outer Promise.all() runs users in parallel, but each user's
// two fetches still run one after the other
export async function getUsersSequential(userIds: string[]) {
  const users = await Promise.all(
    userIds.map(async (id) => {
      // The profile fetch cannot start until the user fetch resolves
      const user = await fetch(`/api/users/${id}`).then(r => r.json());
      const profile = await fetch(`/api/profiles/${id}`).then(r => r.json());
      return { ...user, ...profile };
    })
  );
  return users;
}
// For 10 users: callbacks run in parallel, but each pays 2 sequential round trips
// Typical time: 2 × 200ms = ~400ms instead of ~200ms
// Rewritten as a plain for...of loop, all 20 fetches run back-to-back: ~4,000ms

This pattern looks like it should be fully parallel - and the outer Promise.all() does run the per-user callbacks concurrently. But the inner awaits are sequential: for each user, the code fetches the user record, waits for it to complete, and only then fetches the profile. Every user pays two round trips of latency where one would do - and in the for...of variant AI tools also produce, every fetch for every user runs back-to-back.

The Fix: True Parallelism with Promise.all()

// ✅ GOOD - Full parallelism (PERF-001 + PERF-005 resolved)
export async function getUsersParallel(userIds: string[]) {
  const users = await Promise.all(
    userIds.map(async (id) => {
      // Both fetches fire simultaneously for each user
      const [user, profile] = await Promise.all([
        fetch(`/api/users/${id}`).then(r => r.json()),
        fetch(`/api/profiles/${id}`).then(r => r.json()),
      ]);
      return { ...user, ...profile };
    })
  );
  return users;
}
// For 10 users: all 20 fetches fire in parallel
// Typical time: ~200ms (one round-trip latency, not 20)

The outer Promise.all() runs each user's processing in parallel. The inner Promise.all() runs both fetches for each user in parallel. The total wall-clock time equals the time of the slowest single operation, not the sum of all operations.

For 10 users with two 200ms fetches each, the version with sequential inner awaits takes approximately 400ms, and the fully serial for...of variant takes approximately 4,000ms. The fully parallel version takes approximately 200ms. That is a 2-20x improvement with a two-line change.
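The difference is easy to measure with setTimeout standing in for fetch. This toy benchmark is a sketch (delay and both function names are illustrative):

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Outer Promise.all, but two sequential awaits per item (the anti-pattern)
async function sequentialInner(n: number): Promise<number> {
  const start = Date.now();
  await Promise.all(
    Array.from({ length: n }, async () => {
      await delay(50); // "user" fetch
      await delay(50); // "profile" fetch - waits for the first
    })
  );
  return Date.now() - start; // ~100ms: items are parallel, fetches per item are not
}

// Promise.all at both levels: every delay fires at once
async function fullyParallel(n: number): Promise<number> {
  const start = Date.now();
  await Promise.all(
    Array.from({ length: n }, () => Promise.all([delay(50), delay(50)]))
  );
  return Date.now() - start; // ~50ms
}
```

Note the wall-clock time of sequentialInner does not grow with n - only the per-item latency doubles - whereas a for...of loop would grow linearly.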

When Sequential Execution Is Intentional

Not every sequential async loop is a bug. There are legitimate cases where you need operations to run in order:

Sequential (for...of + await) - use when each operation depends on the previous result, the API is rate-limited, or inserts must happen in order. Code: for (const item of items) { await process(item); }

Parallel (Promise.all) - use for independent operations: fetching multiple records, sending multiple notifications. Code: await Promise.all(items.map(item => process(item)))

Controlled concurrency (p-limit) - use when work should run in parallel but within limits: database connection pools, external API quotas. Code: const limit = pLimit(5); await Promise.all(items.map(item => limit(() => process(item))))

Streaming (for await...of) - use when processing a stream, paginated API responses, or large datasets. Code: for await (const chunk of stream) { process(chunk); }

The key question is: does item B need item A's result before it can start? If yes, sequential is correct. If no, parallelize. AI tools default to sequential because the code is more readable and easier to reason about step-by-step - but that reasoning prioritizes code clarity over runtime performance.

The PERF-005 Variant: Unnecessary Sequential Await Chains

PERF-005 covers a related pattern: sequential await chains outside loops where operations are independent. AI-generated Next.js API routes frequently produce this:

// ❌ BAD - Sequential independent operations (PERF-005)
export async function GET(req: Request) {
  const user = await getUser(req);        // 50ms
  const settings = await getSettings();   // 40ms
  const features = await getFeatures();   // 30ms
  // Total: ~120ms - each waits for the previous

  return Response.json({ user, settings, features });
}

// ✅ GOOD - Parallel independent operations
export async function GET(req: Request) {
  const [user, settings, features] = await Promise.all([
    getUser(req),      // all three fire simultaneously
    getSettings(),
    getFeatures(),
  ]);
  // Total: ~50ms - time of the slowest operation

  return Response.json({ user, settings, features });
}

GitHub's analysis of JavaScript codebases found that sequential independent awaits appear in 41% of async functions in repositories with over 1,000 stars. The pattern is pervasive even in experienced teams' code, and AI tools amplify it by generating straightforward sequential code.

How to Find This Pattern in Your Codebase

Manual code review works for small files but does not scale. Here are automated approaches:

  1. ESLint rule: The built-in no-await-in-loop rule catches awaits inside for loops. It does not cover .map() callbacks - catching those requires a custom rule or additional tooling.
  2. Search your codebase: Look for the pattern async.*=>.*await inside .map( calls. Any match where the operations are independent is a candidate for Promise.all().
  3. Automated scanning: Tools like VibeDoctor (vibedoctor.io) automatically scan your codebase for sequential async patterns (PERF-001 and PERF-005) and flag specific file paths and line numbers. Free to sign up.
  4. Performance profiling: In Node.js, use the --prof flag or clinic.js to profile your server. Sequential awaits show up as staircase patterns in async flame graphs.

According to Veracode's 2024 performance benchmarks, fixing sequential async patterns in API routes reduces p95 response times by an average of 340ms in production applications - a significant improvement for user-perceived performance.

FAQ

Does Promise.all() fail if one operation throws?

Yes. If any Promise in the array rejects, Promise.all() rejects immediately with that error, and the other Promises continue running but their results are discarded. If you need all operations to complete regardless of individual failures, use Promise.allSettled() instead. It always resolves with an array of result objects, each with a status of 'fulfilled' or 'rejected'.
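A minimal sketch of the allSettled pattern - keep the successes, count the failures (loadAll is an illustrative name):

```typescript
async function loadAll() {
  const results = await Promise.allSettled([
    Promise.resolve("ok-1"),
    Promise.reject(new Error("boom")),
    Promise.resolve("ok-2"),
  ]);
  // Partition by status instead of letting one failure discard everything
  const values = results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
  const failures = results.filter((r) => r.status === "rejected").length;
  return { values, failures };
}
```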

Can running too many Promises in parallel cause problems?

Yes. Firing hundreds of database queries or HTTP requests simultaneously can overwhelm connection pools or trigger rate limits. For large arrays, use a concurrency limiter like the p-limit package. A limit of 5-10 concurrent operations is a reasonable default for most APIs and database pools.
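If you would rather not add a dependency, a worker-pool limiter takes about fifteen lines. This mapWithConcurrency helper is a sketch of what p-limit does for you, not p-limit's actual API:

```typescript
// Hypothetical helper: run fn over items with at most `limit` in flight
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Spawn `limit` workers; each repeatedly claims the next unprocessed index
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    async () => {
      while (next < items.length) {
        const index = next++; // safe: the claim happens in a synchronous step
        results[index] = await fn(items[index]);
      }
    }
  );
  await Promise.all(workers);
  return results;
}
```

Because JavaScript is single-threaded, the next++ claim cannot race: each worker reads and increments the counter before yielding at its await.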

Is for...of with await always bad?

No. For sequential dependencies - like processing a payment pipeline where each step depends on the previous - for...of with await is exactly right. The issue is using it for independent operations where parallelism is safe. The PERF-001 check specifically looks for cases where the loop body does not use the result of one iteration to inform the next.
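For contrast, here is a case where for...of with await is exactly right - ordered database migrations, where each one must finish before the next starts (runMigration and the log are illustrative stand-ins):

```typescript
// runMigration stands in for an ordered database write
async function runMigration(name: string, log: string[]): Promise<void> {
  log.push(name);
}

async function migrate(migrations: string[]): Promise<string[]> {
  const log: string[] = [];
  for (const name of migrations) {
    await runMigration(name, log); // ✅ order matters - sequential is correct
  }
  return log;
}
```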

Does this apply to React Server Components?

Yes. React supports parallel data fetching in Server Components using Promise.all() - the same pattern applies. In fact, React's documentation explicitly recommends initiating multiple data fetches in parallel using Promise.all() rather than awaiting them sequentially, because sequential awaits in Server Components create request waterfalls that delay Time to First Byte.

What is the difference between PERF-001 and PERF-005?

PERF-001 specifically targets await inside iterator callbacks (.map(), .forEach(), for...of) where parallelism is safe. PERF-005 targets sequential await chains at the function level - multiple await statements in a row where the operations are independent and could be batched into a single Promise.all(). Both patterns waste wall-clock time.

Scan your codebase for this issue - free

VibeDoctor checks for PERF-001, PERF-005 and 128 other issues across 15 diagnostic areas.

SCAN MY APP →