Service Confusion — GCP Professional Cloud Architect (PCA)

You picked the right service category but the wrong specific service. The exam tests precise service selection, not general knowledge.

Both Connect VPCs — Only One Controls Subnets

The scenario says 'connect resources across two projects.' VPC Network Peering looks correct because it does exactly that. But peering is non-transitive and delegates subnet ownership to each project independently — the moment the scenario mentions a central network team, org-wide firewall rules, or more than two projects needing reachability, peering breaks. Shared VPC exists precisely because subnet control must stay with one host project, not be distributed across service projects.

30% of exam questions affected (60 of 200)

The Scenario

The question describes a straightforward workload and offers two services from the same category — one designed for high-throughput, complex scenarios and one designed for simple, cost-effective use cases. You pick the more capable one because it can do everything the simpler one can. But the scenario specifies "minimal complexity" and "cost-effective." The exam rewards matching capability to requirements, not maximizing capability.

How to Spot It

  • Cloud providers deliberately offer overlapping services. When the service name sounds like a perfect match, verify the non-functional requirements do not point to a simpler alternative.
  • The exam tests precision. A managed queue, a stream processor, and an event bus all "move messages" but solve different problems. Match the communication pattern to the right service.
  • If you are choosing between two services in the same category, the differentiator is usually cost model, throughput guarantee, processing semantics, or operational overhead.

Decision Rules

Whether to orchestrate the train-evaluate-promote workflow with Vertex AI Pipelines (which integrates natively with Vertex ML Metadata and Vertex AI Model Registry, recording every artifact transition as a lineage-linked object) or with Cloud Composer (which can trigger the same Vertex AI jobs as operators but records no ML artifact semantics and cannot satisfy the stated lineage requirement).

Vertex AI Pipelines · Vertex AI Model Registry · Cloud Composer

When the requirement is grounded retrieval Q&A over an existing document corpus with no ML engineering capacity and a short delivery window, select Vertex AI Agent Builder — which delivers managed retrieval, chunking, ranking, and grounding as prebuilt infrastructure — over direct Gemini API calls that require building those components or Natural Language API that provides per-document text analysis rather than cross-corpus Q&A.

Vertex AI Agent Builder · Gemini (via Vertex AI API) · Natural Language API

Whether automating node management alone (GKE Autopilot) satisfies the 'no Kubernetes expertise' constraint, or whether eliminating the Kubernetes control plane entirely (Cloud Run) is required when the workload is stateless, HTTP-driven, and the team cannot absorb pod-spec or cluster lifecycle complexity.

Cloud Run · GKE Autopilot · App Engine Standard
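The Cloud Run versus GKE Autopilot rule above reduces to a small predicate. The sketch below is illustrative only — the flag names are invented for this example, not exam or GCP terminology:

```python
def pick_compute_runtime(stateless_http: bool,
                         team_has_k8s_expertise: bool,
                         needs_pod_level_control: bool) -> str:
    """Toy encoding of the compute decision rule; flags are hypothetical."""
    if needs_pod_level_control and team_has_k8s_expertise:
        # Autopilot automates node management but still exposes pod specs
        # and cluster lifecycle, so it needs Kubernetes skills on the team.
        return "GKE Autopilot"
    if stateless_http:
        # Stateless, HTTP-driven, no Kubernetes expertise: eliminate the
        # control plane entirely rather than merely automating nodes.
        return "Cloud Run"
    return "App Engine Standard"

# A team with no Kubernetes skills and a stateless HTTP workload:
print(pick_compute_runtime(True, False, False))  # Cloud Run
```

The ordering matters: pod-level control is checked first, because that constraint alone rules out the fully serverless options.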

Whether the multi-region external consistency requirement eliminates Cloud SQL and AlloyDB (both regional, ACID-compliant services that provide no global external consistency) in favor of Cloud Spanner (multi-region configuration with a 99.999% SLA and external consistency).

Cloud Spanner · Cloud SQL · AlloyDB

Select Cloud Spanner over Cloud SQL when a relational OLTP workload requires externally consistent ACID transactions spanning multiple geographic regions at scale, because Cloud SQL is a regional single-primary service incapable of providing multi-region external consistency regardless of replication configuration.

Cloud Spanner · Cloud SQL · Bigtable

When the workload requires externally consistent ACID transactions that span multiple GCP regions together with complex relational SQL queries, Cloud Spanner is the required service; Cloud SQL is disqualified because it is a single-region primary service and cannot provide native multi-region external consistency regardless of read-replica count.

Cloud Spanner · Cloud SQL · Firestore
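The three Spanner-versus-Cloud-SQL rules above share one hinge: does the workload need externally consistent relational transactions across regions? A minimal sketch, with invented flag names:

```python
def pick_relational_db(multi_region: bool,
                       external_consistency: bool,
                       high_perf_postgres: bool = False) -> str:
    """Toy encoding of the relational-database decision rules; flags are hypothetical."""
    if multi_region and external_consistency:
        # Cloud SQL and AlloyDB are regional; adding read replicas does not
        # confer multi-region external consistency, so only Spanner qualifies.
        return "Cloud Spanner"
    if high_perf_postgres:
        return "AlloyDB"
    # A regional single-primary with HA standby is sufficient (and cheaper).
    return "Cloud SQL"

print(pick_relational_db(multi_region=True, external_consistency=True))  # Cloud Spanner
```

Note that both conditions must hold before Spanner wins — a regional workload with strict ACID needs stays on Cloud SQL under the "match capability to requirements" principle.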

Select Dataflow over Dataproc when the pipeline is stateless streaming with no Hadoop or Spark ecosystem dependency and the team cannot sustain cluster management overhead — Dataflow's serverless Apache Beam execution model satisfies both the streaming execution requirement and the zero-ops-cluster constraint simultaneously.

Pub/Sub · Dataflow · Dataproc

Whether the combination of a streaming execution requirement and a no-cluster-admin operational constraint selects Dataflow (serverless, native streaming windowing, managed runner) over Dataproc (Hadoop/Spark cluster, requires provisioning and lifecycle management, appropriate for existing Spark ecosystem workloads).

Pub/Sub · Dataflow · Dataproc

When the data flow is continuous and streaming (Pub/Sub source, sub-minute latency, stateful windowed aggregation) and the team has no cluster management capacity, choose Dataflow over Dataproc — Dataflow's Apache Beam streaming runner is native to this execution model and requires no cluster provisioning, while Dataproc's Spark Streaming capability requires cluster lifecycle ownership the team cannot provide.

Pub/Sub · Dataflow · Dataproc
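All three Dataflow-versus-Dataproc rules above pivot on the same two facts: is there a Spark or Hadoop ecosystem dependency, and can the team own cluster lifecycle? A sketch with hypothetical flag names:

```python
def pick_processing_service(spark_or_hadoop_dependency: bool,
                            can_manage_clusters: bool) -> str:
    """Toy encoding of the stream/batch processing decision rules."""
    if spark_or_hadoop_dependency and can_manage_clusters:
        # Existing Spark ecosystem code plus cluster-admin capacity:
        # Dataproc is the lift-and-shift answer.
        return "Dataproc"
    # Serverless Apache Beam runner: native streaming windowing,
    # no cluster provisioning or lifecycle ownership.
    return "Dataflow"

# Streaming from Pub/Sub, no Spark code, no cluster-ops capacity:
print(pick_processing_service(False, False))  # Dataflow
```

Pub/Sub is orthogonal here — it is the ingestion layer feeding either engine, which is why it appears as a distractor rather than an answer.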

When a single network team must own subnet allocation and hold non-delegatable firewall authority across multiple GCP projects, choose Shared VPC (host/service project model); VPC Network Peering is disqualified because it is bilateral, non-transitive, and leaves each VPC owner in control of its own subnets and firewall rules.

Shared VPC · VPC Network Peering · Network Connectivity Center

Whether the network-ownership-boundary constraint — central team must author all firewall rules, service projects must not — mandates Shared VPC (host/service-project model with subnet delegation via Network User IAM) over VPC Network Peering (bilateral, project-autonomous, non-transitive).

Shared VPC · VPC Network Peering · Network Connectivity Center

When a service producer must publish a private endpoint to multiple consumer VPCs across GCP organizational boundaries where IP ranges are unknown or overlapping, choose Private Service Connect over VPC Network Peering because PSC isolates network namespaces via a forwarding-rule endpoint, imposes no CIDR constraints, and does not grant consumers visibility into the producer VPC — whereas peering requires non-overlapping RFC-1918 space and exposes the full producer VPC bidirectionally.

Private Service Connect · VPC Network Peering · Shared VPC

Whether the network ownership boundary requirement — a single team controlling subnet allocation and firewall enforcement across multiple projects — is satisfied by Shared VPC's host/service-project model or by VPC Network Peering's bilateral, per-VPC-autonomous topology.

Shared VPC · VPC Network Peering · Network Connectivity Center
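The networking rules above (Shared VPC, peering, Private Service Connect) collapse into one ordered check. This is a sketch only; the flag names are invented for illustration:

```python
def pick_network_model(cross_org_or_overlapping_cidrs: bool,
                       central_subnet_and_firewall_control: bool,
                       vpc_count: int) -> str:
    """Toy encoding of the VPC connectivity decision rules."""
    if cross_org_or_overlapping_cidrs:
        # PSC exposes a single forwarding-rule endpoint: no CIDR
        # coordination and no producer-VPC visibility for consumers.
        return "Private Service Connect"
    if central_subnet_and_firewall_control:
        # The host project owns subnets and firewall rules; service
        # projects receive only Network User IAM grants.
        return "Shared VPC"
    if vpc_count == 2:
        # Bilateral and non-transitive: fine for exactly two autonomous VPCs.
        return "VPC Network Peering"
    # Many autonomous VPCs needing mesh reachability (assumed hub-and-spoke fit).
    return "Network Connectivity Center"

print(pick_network_model(False, True, 5))  # Shared VPC
```

The ordering encodes the exam's trap: peering is only reached after the central-ownership and cross-boundary conditions have both been ruled out.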

Which hybrid connectivity option meets the combined constraints of sustained 8 Gbps throughput, Google-backbone routing, and 99.99% SLA when the customer already has colocation presence at a Google exchange point?

Dedicated Interconnect · HA VPN · Partner Interconnect
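The hybrid-connectivity question above turns on throughput, colocation presence, and SLA. A sketch assuming the commonly cited ~3 Gbps-per-tunnel Cloud VPN ceiling (verify against current quotas before relying on the exact number):

```python
def pick_hybrid_connectivity(sustained_gbps: float,
                             colocated_at_google_pop: bool) -> str:
    """Toy encoding of the hybrid connectivity decision rule."""
    if sustained_gbps <= 3:
        # HA VPN carries a 99.99% SLA at modest throughput
        # (~3 Gbps per tunnel is the commonly cited ceiling).
        return "HA VPN"
    if colocated_at_google_pop:
        # 10 or 100 Gbps circuits, Google-backbone routing, and a
        # 99.99% SLA with a redundant topology.
        return "Dedicated Interconnect"
    # No colocation presence: reach Google through a service provider.
    return "Partner Interconnect"

print(pick_hybrid_connectivity(8, colocated_at_google_pop=True))  # Dedicated Interconnect
```

Sustained 8 Gbps exceeds practical VPN throughput, and the existing colocation presence is what tips the answer from Partner to Dedicated Interconnect.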

Whether the database fault-domain scope (regional single-primary with HA standby versus multi-region synchronous replication) matches the stated five-nines availability target and region-failure tolerance requirement.

Cloud Spanner (multi-region) · Cloud SQL HA · Managed Instance Groups (MIG)

Domain Coverage

  • Designing and Planning a Cloud Solution Architecture
  • Managing and Provisioning a Solution Infrastructure
  • Analyzing and Optimizing Technical and Business Processes
  • Ensuring Solution and Operations Reliability

Difficulty Breakdown

Medium: 20 · Hard: 24 · Expert: 16

Related Patterns