Generative AI on Vertex AI training overview
Generative AI often relies on large-scale machine learning (ML) models that
are pre-trained on vast amounts of data. These are referred to as foundation
models and serve as a base for various tasks. There are numerous ways to
customize foundation models using Generative AI on Vertex AI:
Tuning: Tuning involves providing a model with a training dataset of
examples specific to the desired downstream task.
Supervised tuning: This technique uses labeled examples to fine-tune
a model. Each example demonstrates the desired output for a given input
during inference. Supervised tuning is effective for tasks where the
expected output isn't overly complex and can be clearly defined, such as
classification, sentiment analysis, entity extraction, summarization of
less complex content, and generation of domain-specific queries.
You can tune text, image, audio, and document data types using
supervised tuning.
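As a sketch, a supervised tuning dataset can be assembled as JSONL, one labeled example per line, where each example pairs an input with the desired output. The user/model chat-turn schema below follows the Gemini-style tuning format as an assumption; the exact schema depends on the model version, and `make_example` is a hypothetical helper, so check the current Vertex AI documentation before relying on it.

```python
import json

# Hypothetical helper (not part of any SDK): one labeled example as a
# user turn (input) followed by a model turn (desired output).
def make_example(user_text: str, model_text: str) -> dict:
    return {
        "contents": [
            {"role": "user", "parts": [{"text": user_text}]},
            {"role": "model", "parts": [{"text": model_text}]},
        ]
    }

# Sentiment classification: a task with simple, clearly defined outputs.
examples = [
    make_example("Classify the sentiment: 'The battery lasts all day.'", "positive"),
    make_example("Classify the sentiment: 'The screen cracked on day one.'", "negative"),
]

# Tuning datasets are commonly serialized as JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

The resulting file would typically be uploaded to Cloud Storage and referenced when starting a tuning job.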
Reinforcement Learning from Human Feedback (RLHF) tuning: This method is
suitable when the desired model output is more complex. RLHF tuning
works well for objectives that aren't easily differentiated through
supervised tuning, such as question answering, summarization of complex
content, and creative content generation.
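The "human feedback" in RLHF is often collected as preference pairs: for one prompt, a rater marks which of two candidate responses is better, and those pairs train a reward model that later steers the policy. The sketch below is purely illustrative (it is not a Vertex AI API); `preference_record`, `reward_agreement`, and the toy reward heuristic are all hypothetical names introduced here.

```python
# Hypothetical record format: the rater preferred `chosen` over `rejected`.
def preference_record(prompt: str, chosen: str, rejected: str) -> dict:
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# Sanity check for a candidate reward function: the fraction of pairs where
# it ranks the human-preferred response higher than the rejected one.
def reward_agreement(reward_fn, records) -> float:
    hits = sum(
        reward_fn(r["prompt"], r["chosen"]) > reward_fn(r["prompt"], r["rejected"])
        for r in records
    )
    return hits / len(records)

records = [
    preference_record(
        "Summarize the report.",
        "Concise, accurate summary.",
        "Off-topic rambling.",
    ),
    preference_record(
        "Explain DNS.",
        "Clear explanation with an example.",
        "A single vague sentence.",
    ),
]

# Stand-in reward for illustration only: longer responses score higher.
toy_reward = lambda prompt, response: len(response)
```

A real reward model is itself a trained network, but the agreement check above is the same shape of evaluation.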
Distillation: Distillation often involves training a smaller "student" model
to mimic the behavior of a larger, more capable "teacher" model.
Adapter model training: This involves training smaller adapter models
(or layers) that work in conjunction with a foundation model to improve
performance on specialized tasks. The original foundation model's parameters
are often kept frozen, and only the adapter's weights are updated during
training.
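The appeal of adapters is how few weights they train. In a LoRA-style adapter, a frozen d×d weight matrix W is augmented with a trainable low-rank product A·B (A is d×r, B is r×d), so only 2·d·r parameters update. The helper below is hypothetical and just counts parameters under that assumption:

```python
def adapter_param_counts(d_model: int, rank: int):
    """Parameter counts for one d_model x d_model weight with a LoRA-style adapter.

    The base weight W stays frozen; only the two low-rank factors
    (d_model x rank and rank x d_model) receive gradient updates, and the
    effective weight at inference is W + A @ B.
    """
    frozen = d_model * d_model
    trainable = 2 * d_model * rank
    return frozen, trainable

frozen, trainable = adapter_param_counts(d_model=4096, rank=8)
print(trainable / frozen)  # → 0.00390625: under 0.4% of the weights are updated
```

With a rank of 8 on a 4096-wide layer, the adapter trains fewer than 1% of that layer's parameters, which is why adapter training is far cheaper than full fine-tuning.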
Grounding: While not a training method, grounding is a critical aspect of
ensuring the reliability of generative AI outputs. Grounding connects the
model's output to verifiable sources of information, reducing the
likelihood of fabricated content (hallucinations). This often involves
giving the model access to specific data sources during inference.
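One simple form of grounding is retrieval augmentation: retrieved source snippets are placed in the prompt, and the model is instructed to answer only from them and to cite them. The sketch below shows that prompt-construction step; `grounded_prompt` is a hypothetical helper, not a Vertex AI function (Vertex AI offers managed grounding features for this).

```python
def grounded_prompt(question: str, snippets: list[str]) -> str:
    """Build a prompt that restricts the model to the supplied sources."""
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the numbered sources below, and cite them by number. "
        "If the sources don't contain the answer, say you don't know.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "What is the return window?",
    ["Returns are accepted within 30 days of delivery."],
)
```

Because the verifiable facts travel with the request, the model's answer can be checked against the cited snippets after inference.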
What's next

Overview of Generative AI on Vertex AI (/vertex-ai/generative-ai/docs/overview)

Last updated 2025-08-29 UTC.