# Class GenerativeModelPreview (1.10.0)

The `GenerativeModelPreview` class is the base class for the generative models that are in preview.

NOTE: Don't instantiate this class directly. Use `vertexai.preview.getGenerativeModel()` instead.

Package
-------

[@google-cloud/vertexai](../overview.html)

Constructors
------------

### (constructor)(getGenerativeModelParams)

    constructor(getGenerativeModelParams: GetGenerativeModelParams);

Constructs a new instance of the `GenerativeModelPreview` class.

Methods
-------
### countTokens(request)

    countTokens(request: CountTokensRequest): Promise<CountTokensResponse>;

Makes an async request to count tokens.

The `countTokens` function returns the token count and the number of billable characters for a prompt.

**Returns:** the `CountTokensResponse` object with the token count.

**Example**

    const request = {
      contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
    };
    const resp = await generativeModelPreview.countTokens(request);
    console.log('count tokens response: ', resp);
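The returned object is plain data. As a minimal sketch, assuming the response exposes `totalTokens` and `totalBillableCharacters` fields (names inferred from the token-count and billable-characters description above, not confirmed on this page), reading it looks like:

```javascript
// Hypothetical CountTokensResponse shape, for illustration only.
const resp = {totalTokens: 6, totalBillableCharacters: 24};

// Destructure the assumed fields.
const {totalTokens, totalBillableCharacters} = resp;
console.log(`tokens=${totalTokens}, billable characters=${totalBillableCharacters}`);
```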
### generateContent(request)

    generateContent(request: GenerateContentRequest | string): Promise<GenerateContentResult>;

Makes an async call to generate content.

The response is returned in [GenerateContentResult.response](/nodejs/docs/reference/vertexai/latest/vertexai/generatecontentresult).

**Returns:** the `GenerateContentResponse` object with the response candidates.

**Example**

    const request = {
      contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
    };
    const result = await generativeModelPreview.generateContent(request);
    console.log('Response: ', JSON.stringify(result.response));
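The request argument is a plain object. As a sketch (roles and part shapes taken from the examples on this page), a multi-turn request with alternating `user` and `model` turns can be built like this; it is just data until passed to one of the methods above:

```javascript
// Build a GenerateContentRequest-shaped object with a short history.
const request = {
  contents: [
    {role: 'user', parts: [{text: 'What is Node.js?'}]},
    {role: 'model', parts: [{text: 'Node.js is a JavaScript runtime built on V8.'}]},
    {role: 'user', parts: [{text: 'How do I install it?'}]},
  ],
};

// The last turn is the new prompt; earlier turns provide context.
console.log(request.contents.length);      // 3
console.log(request.contents.at(-1).role); // 'user'
```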
### generateContentStream(request)

    generateContentStream(request: GenerateContentRequest | string): Promise<StreamGenerateContentResult>;

Makes an async stream request to generate content.

The response is returned chunk by chunk as it's being generated in [StreamGenerateContentResult.stream](/nodejs/docs/reference/vertexai/latest/vertexai/streamgeneratecontentresult). After all chunks of the response are returned, the aggregated response is available in [StreamGenerateContentResult.response](/nodejs/docs/reference/vertexai/latest/vertexai/streamgeneratecontentresult).

**Example**

    const request = {
      contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
    };
    const streamingResult = await generativeModelPreview.generateContentStream(request);
    for await (const item of streamingResult.stream) {
      console.log('stream chunk: ', JSON.stringify(item));
    }
    const aggregatedResponse = await streamingResult.response;
    console.log('aggregated response: ', JSON.stringify(aggregatedResponse));
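The `stream` property is an async iterable, so the consumption pattern above can be illustrated without calling the service at all. The stand-in generator below mimics the chunk shape from the example (the chunk fields are an assumption for illustration; real chunks carry full candidate metadata):

```javascript
// Stand-in for StreamGenerateContentResult.stream: an async generator
// yielding chunk-shaped objects.
async function* fakeStream() {
  yield {candidates: [{content: {role: 'model', parts: [{text: 'Hello'}]}}]};
  yield {candidates: [{content: {role: 'model', parts: [{text: ', world'}]}}]};
}

// Consume chunks as they arrive, then aggregate the text at the end.
async function consume() {
  const pieces = [];
  for await (const item of fakeStream()) {
    pieces.push(item.candidates[0].content.parts[0].text);
  }
  return pieces.join('');
}

consume().then((text) => console.log(text)); // 'Hello, world'
```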
### getCachedContent()

    getCachedContent(): CachedContent | undefined;

### getModelName()

    getModelName(): string;

### getSystemInstruction()

    getSystemInstruction(): Content | undefined;

### startChat(request)

    startChat(request?: StartChatParams): ChatSessionPreview;

Instantiates a [ChatSessionPreview](/nodejs/docs/reference/vertexai/latest/vertexai/chatsessionpreview).

The [ChatSessionPreview](/nodejs/docs/reference/vertexai/latest/vertexai/chatsessionpreview) class is a stateful class that holds the state of the conversation with the model and provides methods to interact with the model in chat mode. Calling this method doesn't make any calls to a remote endpoint. To make a remote call, use [ChatSessionPreview.sendMessage()](/nodejs/docs/reference/vertexai/latest/vertexai/chatsessionpreview) or [ChatSessionPreview.sendMessageStream()](/nodejs/docs/reference/vertexai/latest/vertexai/chatsessionpreview).

**Example**

    const chat = generativeModelPreview.startChat();
    const result1 = await chat.sendMessage('How can I learn more about Node.js?');
    const response1 = await result1.response;
    console.log('Response: ', JSON.stringify(response1));

    const result2 = await chat.sendMessageStream('What about Python?');
    const response2 = await result2.response;
    console.log('Response: ', JSON.stringify(response2));
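`startChat` accepts an optional `StartChatParams`. As a hedged sketch, assuming the params object supports a `history` field of `Content` entries (the same shape as `contents` in the examples above; the field name is an assumption, not confirmed by this page), a chat can be seeded with prior turns:

```javascript
// Hypothetical StartChatParams with a seeded history.
const startChatParams = {
  history: [
    {role: 'user', parts: [{text: 'Hello'}]},
    {role: 'model', parts: [{text: 'Hi! How can I help you today?'}]},
  ],
};

// Passed to startChat, this state is held locally; no remote call is
// made until sendMessage()/sendMessageStream():
// const chat = generativeModelPreview.startChat(startChatParams);
console.log(startChatParams.history.length); // 2
```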
Last updated 2025-08-28 UTC.