Add long-term storage; streaming check
Some checks failed
Build and deploy AI Agent service / deploy (push) Has been cancelled

2026-04-17 01:26:05 +08:00
parent 602d551fd1
commit 404efde282
37 changed files with 794 additions and 2095 deletions

.env

@@ -14,30 +14,30 @@ LLAMACPP_API_KEY=token-abc123
 # llama.cpp service configuration
 # -----------------------------------------------------------------------------
 # Main LLM service (Gemma-4-E2B GGUF) - port 8081
-VLLM_BASE_URL=http://localhost:8081/v1
+VLLM_BASE_URL=http://127.0.0.1:8081/v1
 # Embedding service (embeddinggemma-300M GGUF) - port 8082
-VLLM_EMBEDDING_URL=http://localhost:8082/v1
+LLAMACPP_EMBEDDING_URL=http://127.0.0.1:8082/v1
 # -----------------------------------------------------------------------------
 # Mem0 memory layer configuration
 # -----------------------------------------------------------------------------
 # ⭐ Note: Mem0 now reuses the main LLM instance directly; no separate configuration needed
-# Qdrant vector database address (remote server)
+# Qdrant vector database address (important: always use the remote source)
 QDRANT_URL=http://115.190.121.151:6333
 QDRANT_COLLECTION_NAME=mem0_user_memories
 # -----------------------------------------------------------------------------
 # Database configuration
 # -----------------------------------------------------------------------------
-# PostgreSQL connection string (remote server)
-DB_URI=postgresql://postgres:mysecretpassword@115.190.121.151:5432/langgraph_db?sslmode=disable
+# PostgreSQL connection string (important: always use the remote source)
+DB_URI=postgresql://postgres:huang1998@115.190.121.151:5432/langgraph_db?sslmode=disable
 # -----------------------------------------------------------------------------
 # Frontend configuration
 # -----------------------------------------------------------------------------
 # Backend API address (local development uses port 8083 to avoid conflicting with llama.cpp)
-API_URL=http://localhost:8083/chat
+API_URL=http://127.0.0.1:8083/chat
 # -----------------------------------------------------------------------------
 # Application behavior configuration
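Since this diff renames `VLLM_EMBEDDING_URL` to `LLAMACPP_EMBEDDING_URL`, any code still reading the old name would silently get nothing. A minimal sketch of how consuming code could resolve the embedding endpoint defensively, preferring the new variable and failing fast when neither is set (the helper name `resolve_embedding_url` is hypothetical, not from this repository):

```python
def resolve_embedding_url(env: dict) -> str:
    """Return the embedding service URL from an env-style mapping.

    Prefers the new LLAMACPP_EMBEDDING_URL key introduced in this commit,
    falls back to the legacy VLLM_EMBEDDING_URL, and raises if neither is set
    rather than letting a missing endpoint surface later as a connection error.
    """
    url = env.get("LLAMACPP_EMBEDDING_URL") or env.get("VLLM_EMBEDDING_URL")
    if url is None:
        raise KeyError(
            "embedding URL not configured: set LLAMACPP_EMBEDDING_URL"
        )
    return url


# Values mirror the .env in this diff; in real code, pass os.environ instead.
env = {"LLAMACPP_EMBEDDING_URL": "http://127.0.0.1:8082/v1"}
print(resolve_embedding_url(env))  # http://127.0.0.1:8082/v1
```

In practice this would be called with `os.environ` (after `python-dotenv` loads the `.env` file) so deployments still carrying the old variable name keep working during the transition.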