Himanshu Chandola

Building an AI Chat Interface for Your Portfolio

Adding an AI-powered chat interface to my portfolio has been one of the most fun features I've built. It lets visitors ask questions about me, my work, and my experience — all answered by an AI persona trained on my professional background.

In this post, I'll walk you through how I built it using Google's Gemini 3 Flash Preview and the Vercel AI SDK.

The Stack

  • Next.js 16 with Edge Runtime for low-latency responses
  • Vercel AI SDK for streaming responses
  • Google Gemini 3 Flash Preview as the LLM
  • Upstash Redis for rate limiting
  • Zod for tool schema validation

The API Route

The core is a simple Edge function that streams AI responses:

import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { streamText, tool, stepCountIs } from "ai";
import { z } from "zod";

export const runtime = "edge";

const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY ?? "",
});

export async function POST(request: Request) {
  const { messages } = await request.json();

  const result = streamText({
    model: google("gemini-3-flash-preview"),
    system: aboutMe(), // Your persona prompt
    temperature: 0.5,
    messages: messages,
  });

  return result.toTextStreamResponse();
}

The streamText function handles all the complexity of streaming — you just return result.toTextStreamResponse() and the AI SDK does the rest.

Creating Your AI Persona

The magic is in the system prompt. I created an aboutMe() function that returns a comprehensive prompt including:

  • Basic info: Name, role, location, social links
  • Work experience: Current and past companies
  • Technical skills: Languages, frameworks, tools
  • Projects: Portfolio highlights with links
  • Personality guidelines: Tone, language, boundaries

Here's a snippet of the structure:

export default function aboutMe(): string {
  return `
You are Himanshu's AI Persona. Your role is to assist 
with queries about Himanshu's life and work.

Guidelines:
- Keep responses concise and relevant
- Maintain a casual, friendly tone
- Don't share private information
- Bold important words for emphasis

Current Role: Software Engineer at Exiliensoft
Experience: 3+ years in full-stack development
Skills: TypeScript, React, Next.js, React Native, Node.js
...
`;
}

Adding Tools for Actions

The AI SDK supports tools — functions the AI can call to perform actions. I added three; here are two of them:

tools: {
  connectWithHimanshu: tool({
    description: "Opens LinkedIn profile for connecting",
    inputSchema: z.object({}),
    execute: async () => ({
      action: "open_linkedin",
      url: "https://www.linkedin.com/in/himanshuchandola/",
    }),
  }),
  viewProjects: tool({
    description: "Navigates to projects page",
    inputSchema: z.object({}),
    execute: async () => ({
      action: "navigate",
      url: "/open-source",
    }),
  }),
}

When someone asks to "connect with Himanshu", the AI can invoke the tool and return structured data for the frontend to handle.
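
For context, here's roughly how that tools object slots into the streamText call from the API route. This is a sketch, and the step limit is an example rather than my exact value; the stepCountIs helper imported alongside tool at the top of the route is what lets the model call a tool and then still write a final reply:

const result = streamText({
  model: google("gemini-3-flash-preview"),
  system: aboutMe(),
  temperature: 0.5,
  messages,
  tools: {
    // ...the tool definitions shown above (connectWithHimanshu, viewProjects, ...)
  },
  // Without a stop condition, generation ends right after the first tool call;
  // stepCountIs(2) allows one tool call plus a follow-up text response.
  stopWhen: stepCountIs(2),
});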

Rate Limiting

To prevent abuse, I added rate limiting with Upstash Redis in the Next.js middleware:

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const rateLimiter = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(20, "60 s"),
  prefix: "ratelimit:chat",
});

This allows 20 requests per minute per IP — enough for genuine conversations while still blocking spam.
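
Wiring it up looks roughly like this in middleware.ts (a minimal sketch: the x-forwarded-for lookup and the matcher path are illustrative, not copied verbatim from my repo):

import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export async function middleware(request: NextRequest) {
  // Identify the caller by IP; behind a proxy it arrives in x-forwarded-for.
  const ip = request.headers.get("x-forwarded-for")?.split(",")[0] ?? "anonymous";

  const { success } = await rateLimiter.limit(ip);
  if (!success) {
    return new NextResponse("Too many requests", { status: 429 });
  }

  return NextResponse.next();
}

// Only run the check on the chat endpoint.
export const config = { matcher: "/api/chat" };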

The Frontend

The chat UI is a React component with streaming support:

const response = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages }),
});

const reader = response.body?.getReader();
if (!reader) throw new Error("Response has no body to stream");
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  setStreamingContent(prev => prev + decoder.decode(value, { stream: true }));
}

This gives you that satisfying "typing" effect as the AI responds.
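
For completeness, here's a rough sketch of how that loop can sit inside the chat component. The state names, the ChatMessage type, and the render stub are illustrative, not lifted from my actual component:

"use client";

import { useState } from "react";

type ChatMessage = { role: "user" | "assistant"; content: string };

export function Chat() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [streamingContent, setStreamingContent] = useState("");

  async function sendMessage(content: string) {
    const nextMessages: ChatMessage[] = [...messages, { role: "user", content }];
    setMessages(nextMessages);

    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: nextMessages }),
    });

    const reader = response.body?.getReader();
    if (!reader) return;
    const decoder = new TextDecoder();

    let reply = "";
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      reply += decoder.decode(value, { stream: true });
      setStreamingContent(reply);
    }

    // Once the stream ends, move the finished reply into the history so the
    // next request sends the full conversation back to /api/chat.
    setMessages([...nextMessages, { role: "assistant", content: reply }]);
    setStreamingContent("");
  }

  // ...render messages, streamingContent, and an input that calls sendMessage
  return null;
}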

Key Takeaways

  1. Edge Runtime is fast — responses start in milliseconds
  2. Streaming is essential — nobody wants to wait for a complete response
  3. Tools are powerful — let your AI take actions, not just answer questions
  4. Rate limiting is necessary — APIs get abused quickly
  5. Persona prompts matter — spend time crafting the personality

Try It

Head to himanshuchandola.dev/ai and ask my AI anything about my work. It's been a great way to make my portfolio more interactive and memorable.