AI And ML Platform Selection — GCP Professional Cloud Architect (PCA)
ML Maturity and Customization Depth: Pick the Right Tier
Architecture requirement: deliver an ML capability at the team's stated maturity level with a defined customization need. Competing choices: prebuilt AI APIs, Model Garden foundation models, Vertex AI fine-tuning, and fully custom training via Vertex AI Pipelines or AI Hypercomputer. The deciding constraint is how much labeled data and ML engineering investment the team brings: no data and no ML team resolves to a prebuilt API; a labeled domain dataset resolves to fine-tuning; full control over the training architecture resolves to Vertex AI Pipelines. The scenario's description of team capability is the selector.
What This Pattern Tests
The exam gives you an ML requirement and tests whether you match it to the right abstraction level. Pre-built APIs (Vision AI, Natural Language API, Translation, Speech-to-Text) handle standard tasks with zero training data — use them when your problem matches a well-defined category. AutoML on Vertex AI trains custom classifiers with labeled data but no ML code — use it when pre-built APIs don't fit your domain. Vertex AI custom training with TensorFlow or PyTorch handles novel architectures and large-scale experiments. BigQuery ML runs logistic regression, boosted trees, and neural networks directly in SQL — use it when data already lives in BigQuery and the team doesn't want to move data. The trap is choosing custom Vertex AI training for a problem a pre-built API already solves, or reaching for pre-built APIs when the domain is too specific.
Decision Axis
Data availability and customization need. No training data → pre-built API. Labeled data, no ML code → AutoML. Full control or novel architecture → Vertex AI custom. Data in BigQuery, SQL team → BigQuery ML.
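The axis above can be sketched as a small selector function. This is an illustrative restatement of the decision rule only; the attribute names and the tie-break ordering between BigQuery ML and AutoML are assumptions, not official Google criteria:

```python
def select_ml_platform(has_labeled_data: bool,
                       needs_custom_architecture: bool,
                       data_in_bigquery: bool) -> str:
    """Map exam-scenario attributes to the matching abstraction tier.

    Attribute names are illustrative; exam questions state them in prose.
    """
    if needs_custom_architecture:
        # Full control over model code and the training loop.
        return "Vertex AI custom training"
    if data_in_bigquery:
        # SQL-first team; the data stays in place.
        return "BigQuery ML"
    if has_labeled_data:
        # Labeled domain data, but no ML code written by the team.
        return "AutoML on Vertex AI"
    # No training data: the problem must match a standard category.
    return "Pre-built API"
```

Reading the branches top to bottom mirrors the exam's elimination order: rule out the highest-control tier first, then fall through to the least-effort option that still fits.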
Decision Rules
When the train-evaluate-promote workflow must record artifact lineage, orchestrate it with Vertex AI Pipelines, which integrates natively with Vertex ML Metadata and Vertex AI Model Registry and records every artifact transition as a lineage-linked object, rather than Cloud Composer, which can trigger the same Vertex AI jobs as operators but records no ML artifact semantics and therefore cannot satisfy the lineage requirement.
When the document type and extraction goal match a prebuilt Document AI processor, use the prebuilt processor rather than building a custom model via Vertex AI Training — the absence of labeled training data and the existence of a matching prebuilt capability are jointly sufficient to eliminate the custom-training path.
When the training workload runs at moderate scale (a single multi-GPU node, well below 1,000 chips) and the requirement is artifact registration, Vertex AI Training's managed custom jobs are sufficient; reserve AI Hypercomputer's ultra-scale fabric for workloads at or above the 1,000-chip threshold that triggers its value proposition.
When the requirement is grounded retrieval Q&A over an existing document corpus with no ML engineering capacity and a short delivery window, select Vertex AI Agent Builder — which delivers managed retrieval, chunking, ranking, and grounding as prebuilt infrastructure — over direct Gemini API calls that require building those components or Natural Language API that provides per-document text analysis rather than cross-corpus Q&A.
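The four rules above can be condensed into one dispatch sketch. Every field name and the 1,000-chip threshold encoding are hedged restatements of the rules as written, not official selection criteria:

```python
from dataclasses import dataclass

# Ultra-scale threshold cited in the decision rules (illustrative encoding).
HYPERCOMPUTER_CHIP_THRESHOLD = 1000

@dataclass
class Scenario:
    needs_ml_lineage: bool = False          # artifact lineage must be recorded
    prebuilt_docai_processor: bool = False  # a matching Document AI processor exists
    labeled_training_data: bool = False
    training_chip_count: int = 0            # 0 means no training workload stated
    grounded_corpus_qa: bool = False        # retrieval Q&A over a document corpus

def apply_decision_rules(s: Scenario) -> list[str]:
    """Return the services the stated rules select for this scenario."""
    picks = []
    if s.needs_ml_lineage:
        # Composer can trigger the same jobs but records no ML artifact semantics.
        picks.append("Vertex AI Pipelines (not Cloud Composer)")
    if s.prebuilt_docai_processor and not s.labeled_training_data:
        # No labeled data plus a matching prebuilt capability eliminates custom training.
        picks.append("Document AI prebuilt processor (not Vertex AI Training)")
    if s.training_chip_count:
        picks.append("AI Hypercomputer"
                     if s.training_chip_count >= HYPERCOMPUTER_CHIP_THRESHOLD
                     else "Vertex AI Training custom job")
    if s.grounded_corpus_qa:
        # Managed retrieval, chunking, ranking, and grounding out of the box.
        picks.append("Vertex AI Agent Builder")
    return picks
```

A scenario can trip several rules at once, which is why the function returns a list rather than a single service.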