---
layout: default
title: 🤖 LLM_Agent - Run AI Locally and Access Real-Time Data
description: 🔍 Build a local AI agent that combines powerful LLMs with real-time web search for accurate, up-to-date information—securely on your machine.
---

# 🤖 LLM_Agent - Run AI Locally and Access Real-Time Data

Download LLM_Agent

## 📖 Overview

LLM_Agent is an AI project that lets you run a powerful Large Language Model (Qwen2.5-0.5B) directly on your own computer, with no technical skills required. Built with tools like FastAPI and llama-cpp-python, the app lets you chat naturally and even fetch real-time information from the web in "Search Mode." It features a user-friendly frontend built with HTML, CSS, and JavaScript, and it runs entirely inside Docker.
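Under the hood, a backend like this typically exposes a small HTTP API that the frontend calls. As a purely illustrative sketch (the `/chat` endpoint path and payload shape here are assumptions for illustration, not the project's documented API), a client request might be built like this:

```python
import json
import urllib.request


def build_chat_request(message: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a JSON POST request for a hypothetical /chat endpoint."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat",  # assumed endpoint path, for illustration only
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending such a request (for example with `urllib.request.urlopen`) would return the model's reply once the server from the steps below is running.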

## 🚀 Getting Started

To get started with LLM_Agent, follow these simple steps to download and run the software.

## ✨ Key Features

- Run a large language model locally, with no internet connection required after installation.
- Switch seamlessly between regular chat and a web-enabled Search Mode.
- Enjoy a responsive, easy-to-use interface built with modern web technologies.
- Pull the latest features and improvements with a simple Docker image update.

## 💾 System Requirements

To use LLM_Agent, your computer should meet the following requirements:

- **Operating System:** Windows 10 or later, macOS, or a recent Linux distribution.
- **CPU:** At least 2 cores (quad-core recommended).
- **RAM:** Minimum 8 GB (16 GB recommended).
- **Storage:** At least 2 GB of free disk space.
- **Docker:** Docker must be installed. You can download it from Docker's official site.
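If you want to verify the CPU and storage requirements quickly, a small Python check could look like the sketch below (it covers only core count and free disk space; the thresholds mirror the minimums listed above):

```python
import os
import shutil


def check_requirements(min_cores: int = 2, min_disk_gb: int = 2) -> dict:
    """Report whether CPU core count and free disk space meet the minimums."""
    cores = os.cpu_count() or 1
    free_gb = shutil.disk_usage("/").free / 1024**3  # free space on the root drive
    return {
        "cpu_ok": cores >= min_cores,
        "disk_ok": free_gb >= min_disk_gb,
    }


print(check_requirements())
```

Both values should report `True` before you proceed with the installation.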

## 📥 Download & Install

To get LLM_Agent, visit this page to download: GitHub Releases.

### 📦 Step-by-Step Installation

1. **Download:** Go to the Releases page and download the latest version of LLM_Agent suitable for your system.

2. **Install Docker:** If you haven't done this already, make sure you have Docker installed. Follow the instructions on the Docker website.

3. **Open Terminal/Command Prompt:**
   - Windows: search for "cmd" in the Start menu.
   - Mac: open "Terminal" from Applications.
   - Linux: open Terminal from your application menu.

4. **Navigate to the Download Folder:** Use the `cd` command to navigate to the location where you downloaded LLM_Agent.

   ```bash
   cd path/to/your/download/folder
   ```

5. **Run LLM_Agent:** Type the following command and press Enter:

   ```bash
   docker-compose up
   ```

   This starts the application. Wait a few moments while Docker sets everything up.

6. **Access the Application:** Open your web browser and go to `http://localhost:8000` to reach the LLM_Agent interface.
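Since Docker can take a little while to pull images and load the model, you may want to poll the server instead of refreshing the browser. A small helper like this (assuming the default port 8000 from the steps above) waits until the app responds:

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until it responds or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval):
                return True  # server answered with a success status
        except urllib.error.HTTPError:
            return True  # server responded, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry after a short pause
    return False


if __name__ == "__main__":
    print(wait_for_server("http://localhost:8000"))
```

The function returns `True` as soon as anything answers on that port, and `False` if the timeout expires.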

## 🧑‍🤝‍🧑 Using LLM_Agent

Once you have LLM_Agent running, you will see a simple interface where you can interact with the AI.

### 🔍 Switching Modes

- **Standard Chat Mode:** Type your message and press Enter. The AI responds using the local model.
- **Search Mode:** To fetch real-time data, prefix your request with the keyword "search", for example "search latest news".
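The mode switch described above amounts to simple keyword routing. As a sketch (the leading-"search" check is an assumption based on the description above; the actual server logic may differ), it could look like:

```python
def detect_mode(message: str) -> str:
    """Route a message: one starting with 'search' goes to web search, else plain chat."""
    if message.strip().lower().startswith("search"):
        return "search"
    return "chat"


print(detect_mode("search latest news"))  # → search
print(detect_mode("Tell me a joke"))      # → chat
```

Keeping the routing on the server side means the frontend only ever sends plain text.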

## 📜 Troubleshooting

If you encounter any issues while installing or running LLM_Agent, try the following:

- Ensure Docker is running before you execute the `docker-compose up` command.
- Check your internet connection if the search feature is not working.
- Review the console output in the terminal to identify any specific error messages.

## 💬 Support

If you need help or have questions about LLM_Agent, you can open an issue on the GitHub Issues page.

## 🏷️ Topics

- ai-agent
- chatbot
- docker
- fastapi
- full-stack
- gguf
- llm
- local-llm
- nlp
- python
- quantization
- qwen
- search-agent

## 🔗 Additional Resources

For further information on Docker and FastAPI, see the official Docker documentation and the FastAPI documentation.

By following these steps, you can easily download, install, and start using LLM_Agent. Enjoy exploring the capabilities of your personal AI!