A WebLLM project built with React, Vite, and TailwindCSS. This project is a simple example of how to use WebLLM with React components.
WebLLM is a high-performance in-browser LLM inference engine that runs language models directly in the web browser with hardware acceleration. Everything runs client-side with no server support, accelerated with WebGPU.
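As a rough illustration of the kind of code this project wraps in React components, here is a minimal WebLLM usage sketch using the `@mlc-ai/web-llm` package's `CreateMLCEngine` and its OpenAI-style chat API. The model ID is just an example; this must run in a browser with WebGPU support, and the exact component wiring in this repo may differ.

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Download and initialize a model in the browser (example model ID;
  // any model from the WebLLM prebuilt list can be used).
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (progress) => console.log(progress.text),
  });

  // OpenAI-style chat completion, fully client-side via WebGPU.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello! What can you do?" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

In a React component, the engine would typically be created once (e.g. in an effect or a context provider) and reused across chat turns, since model initialization involves downloading and compiling the weights.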
```shell
# Clone the repository
git clone https://github.com/sammwyy/webllm.git

# Change directory
cd webllm

# Install dependencies
bun install

# Start the development server
bun dev
```
Made with ❤️ by Sammwy
- BTC: bc1q4uzvtx6nsgt7pt7678p9rqel4hkhskpxvck8uq
- ETH: 0x7a70a0C1889A9956460c3c9DCa8169F25Bb098af
- SOL: 7UcE4PzrHoGqFKHyVgsme6CdRSECCZAoWipsHntu5rZx