With today's AI advancements, it's easy to set up a generative AI model on your computer and build a chatbot around it.

In this article we will see how you can set up a chatbot on your system using Ollama and Next.js.
Let's start by setting up Ollama on our system. Visit ollama.com and download the installer for your OS. This will give us the `ollama` command in the terminal/command prompt.
Check the Ollama version by running:

```bash
ollama -v
```
Check out the list of available models on the Ollama library page.
To download and run a model, use the `ollama run` command.

Example:

```bash
ollama run llama3.1
# or
ollama run gemma2
```
You will be able to chat with the model right in the terminal.
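With a model running locally, let's move over to the Next.js side. This walkthrough assumes an existing Next.js project with the App Router and TypeScript; if you don't have one yet, you can scaffold it first (a standard command, shown here for completeness):

```bash
npx create-next-app@latest
```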
There are a few npm packages that need to be installed to use Ollama from the app: `ai` (the Vercel AI SDK), `ollama` (the Ollama JavaScript library), and `ollama-ai-provider` (which connects the two).

To install these dependencies, run:

```bash
npm i ai ollama ollama-ai-provider
```
Under src/app there is a file named page.tsx.
Let's remove everything in it and start with a basic functional component:

src/app/page.tsx

```tsx
export default function Home() {
  return (
    <main>
      {/* Code here... */}
    </main>
  );
}
```
Let's start by importing the `useChat` hook from `ai/react` and `react-markdown`:

```tsx
"use client";

import { useChat } from "ai/react";
import Markdown from "react-markdown";
```
Because we are using a hook, we need to convert this page into a client component.
Tip: You can create a separate component for the chat and render it in page.tsx to limit client component usage, as sketched below.
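For instance, the hook and the chat UI can live in their own client component while the page stays a server component. A minimal sketch (the Chat.tsx file name is hypothetical, not part of the original project):

```tsx
// src/app/Chat.tsx (hypothetical): the "use client" directive lives here
"use client";

import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return <main>{/* messages list and input form go here */}</main>;
}
```

```tsx
// src/app/page.tsx stays a server component and just renders the chat
import Chat from "./Chat";

export default function Home() {
  return <Chat />;
}
```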
In the component, get `messages`, `input`, `handleInputChange`, and `handleSubmit` from the `useChat` hook:

```tsx
const { messages, input, handleInputChange, handleSubmit } = useChat();
```
In JSX, create an input form to capture the user's input and initiate the conversation.

The good thing about this is that we don't need to write the handler or maintain state for the input value; the `useChat` hook provides them for us. A minimal sketch of the form follows.
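A minimal version of that form could look like this (markup and placeholder text are illustrative):

```tsx
<form onSubmit={handleSubmit}>
  <input
    value={input}
    onChange={handleInputChange}
    placeholder="Say something..."
  />
  <button type="submit">Send</button>
</form>
```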
We can display the messages by looping through the `messages` array:

```tsx
{messages.map((m, i) => (
  <div key={i}>
    <Markdown>{m.content}</Markdown>
  </div>
))}
```
The styled version, based on the role of the sender, looks like this (styling classes are omitted here):

```tsx
<main>
  <h1>Local AI Chat</h1>
  {messages.length ? (
    messages.map((m, i) =>
      m.role === "user" ? (
        <div key={i}>
          <span>You</span>
          <Markdown>{m.content}</Markdown>
        </div>
      ) : (
        <div key={i}>
          <span>AI</span>
          <Markdown>{m.content}</Markdown>
        </div>
      )
    )
  ) : (
    <p>Start the conversation...</p>
  )}
</main>
```
Let's take a look at the whole file.

src/app/page.tsx

```tsx
"use client";

import { useChat } from "ai/react";
import Markdown from "react-markdown";

export default function Home() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <main>
      <h1>Local AI Chat</h1>
      {messages.length ? (
        messages.map((m, i) =>
          m.role === "user" ? (
            <div key={i}>
              <span>You</span>
              <Markdown>{m.content}</Markdown>
            </div>
          ) : (
            <div key={i}>
              <span>AI</span>
              <Markdown>{m.content}</Markdown>
            </div>
          )
        )
      ) : (
        <p>Start the conversation...</p>
      )}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
        <button type="submit">Send</button>
      </form>
    </main>
  );
}
```
With this, the frontend part is complete. Now let's handle the API.
Let's start by creating route.ts inside src/app/api/chat.

Based on the Next.js routing convention, this will allow us to handle requests on the localhost:3000/api/chat endpoint, as illustrated below.
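To visualize the convention, here is how the files in this article map to URLs:

```
src/
  app/
    page.tsx            → UI served at /
    api/
      chat/
        route.ts        → POST handler at /api/chat
```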
src/app/api/chat/route.ts

```ts
import { createOllama } from "ollama-ai-provider";
import { streamText } from "ai";

// Create an Ollama provider instance pointing at the local Ollama server.
const ollama = createOllama();

export async function POST(req: Request) {
  // The useChat hook POSTs the conversation history as { messages }.
  const { messages } = await req.json();

  // Stream the model's completion using the Vercel AI SDK.
  const result = await streamText({
    model: ollama("llama3.1"),
    messages,
  });

  return result.toDataStreamResponse();
}
```
The above code uses the Ollama provider together with the Vercel AI SDK to stream the model's output back as the response.
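One note on configuration: as far as I can tell from the ollama-ai-provider package, createOllama() targets the local Ollama server by default and accepts a baseURL option if yours runs elsewhere (treat the exact option and URL below as assumptions to verify against the package docs):

```ts
// Hypothetical configuration: point the provider at a non-default Ollama server.
const ollama = createOllama({
  baseURL: "http://localhost:11434/api",
});
```

You can also swap the model name passed to ollama() for any model you've pulled, e.g. ollama("gemma2").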
Run `npm run dev` to start the server in development mode.

Open the browser and go to localhost:3000 to see the results.
If everything is configured properly, you will be able to talk to your very own chatbot.
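If you want to sanity-check the API route by itself, you can also call it straight from the terminal. Note that the response comes back in the AI SDK's streaming format rather than plain JSON, so expect prefixed chunks:

```bash
curl http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'
```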
You can find the source code here: https://github.com/parasbansal/ai-chat
Let me know in the comments if you have any questions; I'll try to answer them.