Prompt Engineering Using Gemini API
Setup
# Install the SDK with: pip install google-genai
from google import genai
from google.genai import types

# GOOGLE_API_KEY holds your API key (e.g. loaded from an environment variable)
client = genai.Client(api_key=GOOGLE_API_KEY)
Basic Usage
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain AI to me like I'm a kid."
)
print(response.text)
Interactive Chat
chat = client.chats.create(model='gemini-2.0-flash', history=[])
response = chat.send_message('Hello! My name is Zlork.')
print(response.text)
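Because the chat object keeps the running conversation history, later turns can reference earlier ones. A minimal sketch continuing the session above:

# The chat remembers prior turns, so the model can recall the name
response = chat.send_message('Do you remember what my name is?')
print(response.text)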
Listing Available Models
for model in client.models.list():
    print(model.name)
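Each entry returned by client.models.list() also carries metadata. A sketch assuming the supported_actions field on the returned model objects, used to keep only models that work with generate_content:

# Filter for models that support the generateContent action
for model in client.models.list():
    if 'generateContent' in model.supported_actions:
        print(model.name)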
Model Output Settings
# Cap the response at 200 output tokens
short_config = types.GenerateContentConfig(max_output_tokens=200)

# Maximum temperature (2.0, the top of the supported range) for more varied, creative responses
high_temp_config = types.GenerateContentConfig(temperature=2.0)
response = client.models.generate_content(
    model='gemini-2.0-flash',
    config=short_config,
    contents='What could be done with a 1000 dollars and no idea...'
)
print(response.text)
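The high_temp_config defined above is passed the same way. A minimal sketch pairing it with an open-ended prompt (the prompt text is illustrative):

# Higher temperature yields more varied output; rerunning gives different answers
response = client.models.generate_content(
    model='gemini-2.0-flash',
    config=high_temp_config,
    contents='Write a one-sentence story about a robot learning to paint.'
)
print(response.text)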
Function Calling with Gemini
- Docs: Function Calling Guide
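The snippet below assumes set_light_values_declaration is already defined. A minimal sketch of that declaration, modeled on the light-control example from the function calling guide (the function name, parameters, and descriptions are illustrative):

# Declare the function's name, purpose, and parameters as an OpenAPI-style schema
set_light_values_declaration = {
    "name": "set_light_values",
    "description": "Sets the brightness and color temperature of a room light.",
    "parameters": {
        "type": "object",
        "properties": {
            "brightness": {
                "type": "integer",
                "description": "Light level from 0 (off) to 100 (full brightness).",
            },
            "color_temp": {
                "type": "string",
                "enum": ["daylight", "cool", "warm"],
                "description": "Color temperature of the light.",
            },
        },
        "required": ["brightness", "color_temp"],
    },
}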
# Wrap the declaration in a Tool and attach it to the request config
tools = types.Tool(function_declarations=[set_light_values_declaration])
config = types.GenerateContentConfig(tools=[tools])
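With the tool attached, the model can respond with a structured function call instead of plain text. A minimal sketch of the round trip (the prompt is illustrative):

response = client.models.generate_content(
    model='gemini-2.0-flash',
    config=config,
    contents='Dim the lights to 20% and make them warm.'
)
# A function-calling response carries a FunctionCall part with .name and .args
print(response.candidates[0].content.parts[0].function_call)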
MIME Type Example for Structured Output
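The config below passes a PizzaOrder schema as response_schema. A minimal sketch of such a schema as a Pydantic model (the class name and fields are illustrative; the SDK accepts Pydantic models directly):

from pydantic import BaseModel

# Illustrative schema: the model's JSON output is constrained to these fields
class PizzaOrder(BaseModel):
    size: str
    toppings: list[str]
    quantity: int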
config = types.GenerateContentConfig(
    temperature=0.1,
    response_mime_type="application/json",
    response_schema=PizzaOrder,
)
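With the schema attached, the model returns JSON matching it, and the SDK parses it back into the schema type (the prompt is illustrative):

response = client.models.generate_content(
    model='gemini-2.0-flash',
    config=config,
    contents='I want a large pizza with mushrooms and olives.'
)
# response.parsed returns the output as a PizzaOrder instance
print(response.parsed)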
Chain-of-Thought Prompting
Chain-of-thought prompting improves output quality by asking the model to show its reasoning steps before committing to an answer. It tends to raise accuracy on multi-step reasoning problems, at the cost of longer responses and therefore higher token usage.
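A minimal sketch using the common zero-shot trigger phrase "Let's think step by step" (the prompt is illustrative):

prompt = """When I was 4 years old, my partner was 3 times my age. Now I am 20.
How old is my partner? Let's think step by step."""

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents=prompt
)
# The response walks through the intermediate steps before the final answer
print(response.text)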
Code Execution Tool (Gemini)
config = types.GenerateContentConfig(
    tools=[types.Tool(code_execution=types.ToolCodeExecution())],
)
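With the tool enabled, the model can write and run Python to answer computational questions. A minimal sketch (the prompt is illustrative; the response interleaves text, generated code, and execution results as separate parts):

response = client.models.generate_content(
    model='gemini-2.0-flash',
    config=config,
    contents='Compute the sum of the first 14 prime numbers.'
)
for part in response.candidates[0].content.parts:
    if part.executable_code is not None:
        print(part.executable_code.code)          # the Python the model wrote
    if part.code_execution_result is not None:
        print(part.code_execution_result.output)  # what running it printed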