v0.21.1-fastapi

This commit is contained in:
2025-11-04 16:06:36 +08:00
parent 3e58c3d0e9
commit d57b5d76ae
218 changed files with 19617 additions and 72339 deletions

View File

@@ -37,9 +37,12 @@ OPENSEARCH_PASSWORD=infini_rag_flow_OS_01
# The port used to expose the Kibana service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
# To enable Kibana, you need to:
# 1. Ensure that COMPOSE_PROFILES includes kibana, for example: COMPOSE_PROFILES=${DOC_ENGINE},kibana
# 2. Comment out or delete the following settings of the es service in docker-compose-base.yml: xpack.security.enabled, xpack.security.http.ssl.enabled, xpack.security.transport.ssl.enabled (for details: https://www.elastic.co/docs/deploy-manage/security/self-auto-setup#stack-existing-settings-detected)
# 3. Adjust es.hosts in conf/service_config.yaml or docker/service_conf.yaml.template to 'https://localhost:1200'
# 4. After startup succeeds, generate the Kibana enrollment token inside the es container with `bin/elasticsearch-create-enrollment-token -s kibana`; then you can use Kibana normally
KIBANA_PORT=6601
KIBANA_USER=rag_flow
KIBANA_PASSWORD=infini_rag_flow
# The maximum amount of memory, in bytes, that a specific Docker container can use while running.
# Update it according to the available memory in the host machine.
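Taken together, the Kibana steps above reduce to a short session; a sketch, assuming `DOC_ENGINE=elasticsearch` and the default container name `ragflow-es-01` from docker-compose-base.yml:

```bash
# Steps 1-3: edit docker/.env (COMPOSE_PROFILES=elasticsearch,kibana), the es
# service in docker-compose-base.yml, and es.hosts in the service config, then:
docker compose -f docker-compose.yml up -d

# Step 4: generate the Kibana enrollment token inside the es container
docker exec -it ragflow-es-01 bin/elasticsearch-create-enrollment-token -s kibana
```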
@@ -91,15 +94,16 @@ REDIS_PASSWORD=infini_rag_flow
# The port used to expose RAGFlow's HTTP API service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
SVR_HTTP_PORT=9380
ADMIN_SVR_HTTP_PORT=9381
# The RAGFlow Docker image to download.
# Defaults to the v0.20.5-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:fastapi
# Defaults to the v0.21.1-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1
#
# The Docker image of the v0.20.5 edition includes built-in embedding models:
# The Docker image of the v0.21.1 edition includes built-in embedding models:
# - BAAI/bge-large-zh-v1.5
# - maidalun1020/bce-embedding-base_v1
#
@@ -192,12 +196,8 @@ REGISTER_ENABLED=1
# - For OpenSearch:
# COMPOSE_PROFILES=opensearch,sandbox
POSTGRES_DBNAME=rag_flow
POSTGRES_USER=rag_flow
POSTGRES_PASSWORD=infini_rag_flow
POSTGRES_PORT=5432
DB_TYPE=postgres
USE_OCR_HTTP=true
DB_TYPE=postgres
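After editing these values, one way to confirm which image tag the stack will actually use; `docker compose config` simply renders the merged configuration with `.env` interpolated:

```bash
# Render the effective configuration and check the resolved image tag
docker compose -f docker-compose.yml config | grep "image:"
```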

View File

@@ -1,85 +1,269 @@
# RAGFlow Docker Service Management

## Problem solved

With the original configuration, every start of `docker-compose.yml` re-created the services defined in `docker-compose-base.yml`; it has now been changed to start only the ragflow service.

## Docker Compose network naming

Docker Compose automatically prefixes network names with the project name:

- **project name** + **network name** = final network name
- The default project name is usually the directory name: `ragflow-20250916`
- Final network name: `ragflow-20250916_ragflow`

## Changes

1. **Removed the `include` directive**: `docker-compose-base.yml` is no longer included
2. **Uses an external network**: the ragflow service connects to the `ragflow-20250916_ragflow` network created by `docker-compose-base.yml`
3. **Removed `depends_on`**: no longer depends on the postgres health check
4. **Network configuration** (see the stanza below):
   - `docker-compose-base.yml` creates the network named `ragflow-20250916_ragflow`
   - `docker-compose.yml` connects to the existing network with `external: true`
5. **Unified project name**: the `-p ragflow` flag pins the project name for all invocations
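For reference, the stanza that points 2 and 4 describe, as it appears at the bottom of `docker-compose.yml`:

```yaml
networks:
  ragflow:
    name: ragflow-20250916_ragflow
    external: true
```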
## Usage

### First use (initialization)

```bash
# 1. Start the base services (creates the network)
docker-compose -p ragflow -f docker-compose-base.yml up -d

# 2. Start the ragflow service
docker-compose -p ragflow -f docker-compose.yml up -d ragflow
```

### Daily use (start only ragflow)

```bash
# Use the script (recommended)
./start-ragflow.sh

# Or start manually
docker-compose -p ragflow -f docker-compose.yml up -d ragflow
```

### Using ragflow.sh (full management)

```bash
# Start the RAGFlow service (does not re-create the base services)
./ragflow.sh start

# Stop the RAGFlow service (keeps the base services running)
./ragflow.sh stop

# Restart the RAGFlow service
./ragflow.sh restart

# Check service status
./ragflow.sh status

# View logs
./ragflow.sh logs
```

### Manual operations

```bash
# Start only the base services
docker-compose -f docker-compose-base.yml up -d

# Start only the ragflow service
docker-compose -f docker-compose.yml up -d ragflow

# Stop the ragflow service
docker-compose -f docker-compose.yml down
```
## Services

- **Base services**: postgres, redis, minio, opensearch
- **Application service**: ragflow-server
- **Network**: ragflow (external)

## Benefits

1. **Fast startup**: only the services you need are started
2. **Data persistence**: base-service data is never lost
3. **Flexible management**: each service can be managed independently
4. **Resource savings**: avoids unnecessary service re-creation
# README

<details open>
<summary><b>📗 Table of Contents</b></summary>

- 🐳 [Docker Compose](#-docker-compose)
- 🐬 [Docker environment variables](#-docker-environment-variables)
- 🐋 [Service configuration](#-service-configuration)
- 📋 [Setup Examples](#-setup-examples)

</details>

## 🐳 Docker Compose

- **docker-compose.yml**
  Sets up environment for RAGFlow and its dependencies.
- **docker-compose-base.yml**
  Sets up environment for RAGFlow's dependencies: Elasticsearch/[Infinity](https://github.com/infiniflow/infinity), MySQL, MinIO, and Redis.

> [!CAUTION]
> We do not actively maintain **docker-compose-CN-oc9.yml**, **docker-compose-gpu-CN-oc9.yml**, or **docker-compose-gpu.yml**, so use them at your own risk. However, you are welcome to file a pull request to improve any of them.

## 🐬 Docker environment variables

The [.env](./.env) file contains important environment variables for Docker.

### Elasticsearch

- `STACK_VERSION`
  The version of Elasticsearch. Defaults to `8.11.3`.
- `ES_PORT`
  The port used to expose the Elasticsearch service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `1200`.
- `ELASTIC_PASSWORD`
  The password for Elasticsearch.

### Kibana

- `KIBANA_PORT`
  The port used to expose the Kibana service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `6601`.
- `KIBANA_USER`
  The username for Kibana. Defaults to `rag_flow`.
- `KIBANA_PASSWORD`
  The password for Kibana. Defaults to `infini_rag_flow`.

### Resource management

- `MEM_LIMIT`
  The maximum amount of memory, in bytes, that *a specific* Docker container can use while running. Defaults to `8073741824`.

### MySQL

- `MYSQL_PASSWORD`
  The password for MySQL.
- `MYSQL_PORT`
  The port used to expose the MySQL service to the host machine, allowing **external** access to the MySQL database running inside the Docker container. Defaults to `5455`.

### MinIO

- `MINIO_CONSOLE_PORT`
  The port used to expose the MinIO console interface to the host machine, allowing **external** access to the web-based console running inside the Docker container. Defaults to `9001`.
- `MINIO_PORT`
  The port used to expose the MinIO API service to the host machine, allowing **external** access to the MinIO object storage service running inside the Docker container. Defaults to `9000`.
- `MINIO_USER`
  The username for MinIO.
- `MINIO_PASSWORD`
  The password for MinIO.

### Redis

- `REDIS_PORT`
  The port used to expose the Redis service to the host machine, allowing **external** access to the Redis service running inside the Docker container. Defaults to `6379`.
- `REDIS_PASSWORD`
  The password for Redis.

### RAGFlow

- `SVR_HTTP_PORT`
  The port used to expose RAGFlow's HTTP API service to the host machine, allowing **external** access to the service running inside the Docker container. Defaults to `9380`.
- `RAGFLOW-IMAGE`
  The Docker image edition. Available editions:
  - `infiniflow/ragflow:v0.21.1-slim` (default): The RAGFlow Docker image without embedding models.
  - `infiniflow/ragflow:v0.21.1`: The RAGFlow Docker image with built-in embedding models:
    - `BAAI/bge-large-zh-v1.5`
    - `maidalun1020/bce-embedding-base_v1`

> [!TIP]
> If you cannot download the RAGFlow Docker image, try the following mirrors.
>
> - For the `nightly-slim` edition:
>   - `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly-slim` or,
>   - `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly-slim`.
> - For the `nightly` edition:
>   - `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly` or,
>   - `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly`.
### Timezone
- `TIMEZONE`
The local time zone. Defaults to `'Asia/Shanghai'`.
### Hugging Face mirror site
- `HF_ENDPOINT`
The mirror site for huggingface.co. It is disabled by default. You can uncomment this line if you have limited access to the primary Hugging Face domain.
### macOS
- `MACOS`
Optimizations for macOS. It is disabled by default. You can uncomment this line if your OS is macOS.
### Maximum file size
- `MAX_CONTENT_LENGTH`
The maximum file size for each uploaded file, in bytes. You can uncomment this line if you wish to change the 128M file size limit. After making the change, ensure you update `client_max_body_size` in nginx/nginx.conf correspondingly.
### Doc bulk size
- `DOC_BULK_SIZE`
The number of document chunks processed in a single batch during document parsing. Defaults to `4`.
### Embedding batch size
- `EMBEDDING_BATCH_SIZE`
The number of text chunks processed in a single batch during embedding vectorization. Defaults to `16`.
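Both knobs trade throughput for memory; a hypothetical override in docker/.env for a host with spare RAM (the values here are illustrative, not recommendations):

```bash
# docker/.env: larger batches mean faster parsing/embedding but more memory per worker
DOC_BULK_SIZE=8
EMBEDDING_BATCH_SIZE=32
```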
## 🐋 Service configuration
[service_conf.yaml](./service_conf.yaml) specifies the system-level configuration for RAGFlow and is used by its API server and task executor. In a dockerized setup, this file is automatically created based on the [service_conf.yaml.template](./service_conf.yaml.template) file, replacing all environment variables with their values.
- `ragflow`
- `host`: The API server's IP address inside the Docker container. Defaults to `0.0.0.0`.
- `port`: The API server's serving port inside the Docker container. Defaults to `9380`.
- `mysql`
- `name`: The MySQL database name. Defaults to `rag_flow`.
- `user`: The username for MySQL.
- `password`: The password for MySQL.
- `port`: The MySQL serving port inside the Docker container. Defaults to `3306`.
- `max_connections`: The maximum number of concurrent connections to the MySQL database. Defaults to `100`.
  - `stale_timeout`: The timeout, in seconds, after which an idle pooled connection is considered stale and is recycled. Defaults to `30`.
- `minio`
- `user`: The username for MinIO.
- `password`: The password for MinIO.
- `host`: The MinIO serving IP *and* port inside the Docker container. Defaults to `minio:9000`.
- `oss`
- `access_key`: The access key ID used to authenticate requests to the OSS service.
- `secret_key`: The secret access key used to authenticate requests to the OSS service.
- `endpoint_url`: The URL of the OSS service endpoint.
- `region`: The OSS region where the bucket is located.
  - `bucket`: The name of the OSS bucket where files will be stored. Required if you want to store all files in a single specified bucket.
- `prefix_path`: Optional. A prefix path to prepend to file names in the OSS bucket, which can help organize files within the bucket.
- `s3`:
- `access_key`: The access key ID used to authenticate requests to the S3 service.
- `secret_key`: The secret access key used to authenticate requests to the S3 service.
- `endpoint_url`: The URL of the S3-compatible service endpoint. This is necessary when using an S3-compatible protocol instead of the default AWS S3 endpoint.
  - `bucket`: The name of the S3 bucket where files will be stored. Required if you want to store all files in a single specified bucket.
- `region`: The AWS region where the S3 bucket is located. This is important for directing requests to the correct data center.
- `signature_version`: Optional. The version of the signature to use for authenticating requests. Common versions include `v4`.
- `addressing_style`: Optional. The style of addressing to use for the S3 endpoint. This can be `path` or `virtual`.
- `prefix_path`: Optional. A prefix path to prepend to file names in the S3 bucket, which can help organize files within the bucket.
- `oauth`
The OAuth configuration for signing up or signing in to RAGFlow using a third-party account.
- `<channel>`: Custom channel ID.
  - `type`: Authentication type. Options include `oauth2`, `oidc`, and `github`. Defaults to `oauth2`; when the `issuer` parameter is provided, it defaults to `oidc`.
  - `icon`: Icon ID. Options include `github` and `sso`; defaults to `sso`.
  - `display_name`: Channel name. Defaults to the Title Case form of the channel ID.
- `client_id`: Required, unique identifier assigned to the client application.
- `client_secret`: Required, secret key for the client application, used for communication with the authentication server.
- `authorization_url`: Base URL for obtaining user authorization.
- `token_url`: URL for exchanging authorization code and obtaining access token.
- `userinfo_url`: URL for obtaining user information (username, email, etc.).
- `issuer`: Base URL of the identity provider. OIDC clients can dynamically obtain the identity provider's metadata (`authorization_url`, `token_url`, `userinfo_url`) through `issuer`.
- `scope`: Requested permission scope, a space-separated string. For example, `openid profile email`.
- `redirect_uri`: Required, URI to which the authorization server redirects during the authentication flow to return results. Must match the callback URI registered with the authentication server. Format: `https://your-app.com/v1/user/oauth/callback/<channel>`. For local configuration, you can directly use `http://127.0.0.1:80/v1/user/oauth/callback/<channel>`.
- `user_default_llm`
The default LLM to use for a new RAGFlow user. It is disabled by default. To enable this feature, uncomment the corresponding lines in **service_conf.yaml.template**.
- `factory`: The LLM supplier. Available options:
- `"OpenAI"`
- `"DeepSeek"`
- `"Moonshot"`
- `"Tongyi-Qianwen"`
- `"VolcEngine"`
- `"ZHIPU-AI"`
- `api_key`: The API key for the specified LLM. You will need to apply for your model API key online.
> [!TIP]
> If you do not set the default LLM here, configure the default LLM on the **Settings** page in the RAGFlow UI.
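Pulling the `oauth` and `user_default_llm` fields above into one place, a minimal sketch of what the corresponding block in **service_conf.yaml.template** could look like; the channel ID `my_sso`, the issuer URL, and all credential values are placeholders:

```yaml
oauth:
  my_sso:                       # custom channel ID (placeholder)
    type: oidc                  # implied anyway once issuer is provided
    display_name: 'My SSO'
    client_id: '<client-id>'
    client_secret: '<client-secret>'
    issuer: 'https://sso.example.com/realms/main'  # metadata discovered from here
    scope: 'openid profile email'
    redirect_uri: 'https://your-app.com/v1/user/oauth/callback/my_sso'
user_default_llm:
  factory: 'OpenAI'
  api_key: 'sk-...'             # apply for your model API key online
```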
## 📋 Setup Examples
### 🔒 HTTPS Setup
#### Prerequisites
- A registered domain name pointing to your server
- Port 80 and 443 open on your server
- Docker and Docker Compose installed
#### Getting and configuring certificates (Let's Encrypt)
If you want your instance to be available under `https`, follow these steps:
1. **Install Certbot and obtain certificates**
```bash
# Ubuntu/Debian
sudo apt update && sudo apt install certbot
# CentOS/RHEL
sudo yum install certbot
# Obtain certificates (replace with your actual domain)
sudo certbot certonly --standalone -d your-ragflow-domain.com
```
2. **Locate your certificates**
Once generated, your certificates will be located at:
- Certificate: `/etc/letsencrypt/live/your-ragflow-domain.com/fullchain.pem`
- Private key: `/etc/letsencrypt/live/your-ragflow-domain.com/privkey.pem`
3. **Update docker-compose.yml**
Add the certificate volumes to the `ragflow` service in your `docker-compose.yml`:
```yaml
services:
ragflow:
# ...existing configuration...
volumes:
# SSL certificates
- /etc/letsencrypt/live/your-ragflow-domain.com/fullchain.pem:/etc/nginx/ssl/fullchain.pem:ro
- /etc/letsencrypt/live/your-ragflow-domain.com/privkey.pem:/etc/nginx/ssl/privkey.pem:ro
# Switch to HTTPS nginx configuration
- ./nginx/ragflow.https.conf:/etc/nginx/conf.d/ragflow.conf
# ...other existing volumes...
```
4. **Update nginx configuration**
Edit `nginx/ragflow.https.conf` and replace `my_ragflow_domain.com` with your actual domain name.
5. **Restart the services**
```bash
docker-compose down
docker-compose up -d
```
> [!IMPORTANT]
> - Ensure your domain's DNS A record points to your server's IP address
> - Stop any services running on ports 80/443 before obtaining certificates with `--standalone`
> [!TIP]
> For development or testing, you can use self-signed certificates, but browsers will show security warnings.
#### Alternative: Using existing certificates
If you already have SSL certificates from another provider:
1. Place your certificates in a directory accessible to Docker
2. Update the volume paths in `docker-compose.yml` to point to your certificate files
3. Ensure the certificate file contains the full certificate chain
4. Follow steps 4-5 from the Let's Encrypt guide above
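Let's Encrypt certificates expire after roughly 90 days, so you will also want renewal; a sketch using certbot's standard renew command, assuming the compose service is named `ragflow` as above:

```bash
# Renew any certificates close to expiry, then restart the container so nginx
# picks up the new files (the certificate volumes above are mounted read-only).
sudo certbot renew --deploy-hook "docker compose restart ragflow"
```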

View File

@@ -1,7 +1,44 @@
services:
  es01:
    container_name: ragflow-es-01
    profiles:
      - elasticsearch
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    env_file: .env
    environment:
      - node.name=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=false
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - cluster.routing.allocation.disk.watermark.low=5gb
      - cluster.routing.allocation.disk.watermark.high=3gb
      - cluster.routing.allocation.disk.watermark.flood_stage=2gb
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "curl http://localhost:9200"]
      interval: 10s
      timeout: 10s
      retries: 120
    networks:
      - ragflow
    restart: on-failure
  opensearch01:
    container_name: ragflow-opensearch-01
    profiles:
      - opensearch
    image: hub.icert.top/opensearchproject/opensearch:2.19.1
    volumes:
      - osdata01:/usr/share/opensearch/data
@@ -22,7 +59,6 @@ services:
      - cluster.routing.allocation.disk.watermark.flood_stage=2gb
      - TZ=${TIMEZONE}
      - http.port=9201
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
@@ -37,29 +73,96 @@ services:
      - ragflow
    restart: on-failure
  postgres:
    image: postgres:15
    container_name: ragflow-postgres
    env_file: .env
    environment:
      - POSTGRES_DB=${POSTGRES_DBNAME}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - TZ=${TIMEZONE}
    ports:
      - ${POSTGRES_PORT-5440}:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data
    mem_limit: ${MEM_LIMIT}
    networks:
      - ragflow
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DBNAME}"]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: on-failure
  infinity:
    container_name: ragflow-infinity
    profiles:
      - infinity
    image: infiniflow/infinity:v0.6.1
    volumes:
      - infinity_data:/var/infinity
      - ./infinity_conf.toml:/infinity_conf.toml
    command: ["-f", "/infinity_conf.toml"]
    ports:
      - ${INFINITY_THRIFT_PORT}:23817
      - ${INFINITY_HTTP_PORT}:23820
      - ${INFINITY_PSQL_PORT}:5432
    env_file: .env
    environment:
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      nofile:
        soft: 500000
        hard: 500000
    networks:
      - ragflow
    healthcheck:
      test: ["CMD", "curl", "http://localhost:23820/admin/node/current"]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: on-failure
  sandbox-executor-manager:
    container_name: ragflow-sandbox-executor-manager
    profiles:
      - sandbox
    image: ${SANDBOX_EXECUTOR_MANAGER_IMAGE-infiniflow/sandbox-executor-manager:latest}
    privileged: true
    ports:
      - ${SANDBOX_EXECUTOR_MANAGER_PORT-9385}:9385
    env_file: .env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - ragflow
    security_opt:
      - no-new-privileges:true
    environment:
      - TZ=${TIMEZONE}
      - SANDBOX_EXECUTOR_MANAGER_POOL_SIZE=${SANDBOX_EXECUTOR_MANAGER_POOL_SIZE:-3}
      - SANDBOX_BASE_PYTHON_IMAGE=${SANDBOX_BASE_PYTHON_IMAGE:-infiniflow/sandbox-base-python:latest}
      - SANDBOX_BASE_NODEJS_IMAGE=${SANDBOX_BASE_NODEJS_IMAGE:-infiniflow/sandbox-base-nodejs:latest}
      - SANDBOX_ENABLE_SECCOMP=${SANDBOX_ENABLE_SECCOMP:-false}
      - SANDBOX_MAX_MEMORY=${SANDBOX_MAX_MEMORY:-256m}
      - SANDBOX_TIMEOUT=${SANDBOX_TIMEOUT:-10s}
    healthcheck:
      test: ["CMD", "curl", "http://localhost:9385/healthz"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: on-failure
  mysql:
    # mysql:5.7 linux/arm64 image is unavailable.
    image: mysql:8.0.39
    container_name: ragflow-mysql
    env_file: .env
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - TZ=${TIMEZONE}
    command:
      --max_connections=1000
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci
      --default-authentication-plugin=mysql_native_password
      --tls_version="TLSv1.2,TLSv1.3"
      --init-file /data/application/init.sql
      --binlog_expire_logs_seconds=604800
    ports:
      - ${MYSQL_PORT}:3306
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init.sql:/data/application/init.sql
    networks:
      - ragflow
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-p${MYSQL_PASSWORD}"]
      interval: 10s
      timeout: 10s
      retries: 3
    restart: on-failure
  minio:
    image: quay.io/minio/minio:RELEASE.2025-06-13T11-33-47Z
    container_name: ragflow-minio
@@ -85,7 +188,7 @@ services:
  redis:
    # swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/valkey/valkey:8
    image: valkey/valkey
    image: valkey/valkey:8
    container_name: ragflow-redis
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 128mb --maxmemory-policy allkeys-lru
    env_file: .env
@@ -104,24 +207,47 @@ services:
      start_period: 10s
  kibana:
    container_name: ragflow-kibana
    profiles:
      - kibana
    image: kibana:${STACK_VERSION}
    ports:
      - ${KIBANA_PORT-5601}:5601
    env_file: .env
    environment:
      - TZ=${TIMEZONE}
    volumes:
      - kibana_data:/usr/share/kibana/data
    depends_on:
      es01:
        condition: service_started
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5601/api/status"]
      interval: 10s
      timeout: 10s
      retries: 120
    networks:
      - ragflow
    restart: on-failure
volumes:
  esdata01:
    name: ragflow_esdata01
    driver: local
  osdata01:
    name: ragflow_osdata01
    driver: local
  infinity_data:
    name: ragflow_infinity_data
    driver: local
  mysql_data:
    name: ragflow_mysql_data
    driver: local
  minio_data:
    name: ragflow_minio_data
    driver: local
  redis_data:
    name: ragflow_redis_data
    driver: local
  postgres_data:
    name: ragflow_postgres_data
    driver: local
  kibana_data:
    driver: local
networks:
  ragflow:
    name: ragflow-20250916_ragflow
    driver: bridge
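To confirm the shared network exists after bringing up the base services, a quick sanity check, assuming the fixed name shown above:

```bash
# Should print the network name; a "not found" error means the base services
# have not been started yet.
docker network inspect ragflow-20250916_ragflow --format '{{.Name}}'
```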

View File

@@ -1,6 +1,11 @@
include:
  - ./docker-compose-base.yml

# To ensure that the container processes the locally modified `service_conf.yaml.template` instead of the one included in its image, you need to mount the local `service_conf.yaml.template` to the container.
services:
  ragflow:
    depends_on:
      mysql:
        condition: service_healthy
    image: ${RAGFLOW_IMAGE}
    # Example configuration to set up an MCP server:
    # command:
@@ -17,14 +22,19 @@ services:
    #   - --no-transport-sse-enabled # Disable legacy SSE endpoints (/sse and /messages/)
    #   - --no-transport-streamable-http-enabled # Disable Streamable HTTP transport (/mcp endpoint)
    #   - --no-json-response # Disable JSON response mode in Streamable HTTP transport (instead of SSE over HTTP)
    # Example configuration to start the Admin server:
    # command:
    #   - --enable-adminserver
    container_name: ragflow-server
    ports:
      - ${SVR_HTTP_PORT}:9380
      - 8000:80
      - 8443:443
      - 15678:5678
      - 15679:5679
      - 19382:9382 # entry for MCP (host_port:docker_port). The docker_port must match the value you set for `mcp-port` above.
      - ${ADMIN_SVR_HTTP_PORT}:9381
      - 80:80
      - 443:443
      - 5678:5678
      - 5679:5679
      - 9382:9382 # entry for MCP (host_port:docker_port). The docker_port must match the value you set for `mcp-port` above.
    volumes:
      - ./ragflow-logs:/ragflow/logs
      - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
@@ -38,7 +48,6 @@ services:
      - TZ=${TIMEZONE}
      - HF_ENDPOINT=${HF_ENDPOINT-}
      - MACOS=${MACOS-}
      - DB_TYPE=postgres
    networks:
      - ragflow
    restart: on-failure
@@ -46,8 +55,25 @@ services:
# If you use Docker Desktop, the --add-host flag is optional. This flag ensures that the host's internal IP is exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
ragflow:
name: ragflow-20250916_ragflow
external: true
# executor:
# depends_on:
# mysql:
# condition: service_healthy
# image: ${RAGFLOW_IMAGE}
# container_name: ragflow-executor
# volumes:
# - ./ragflow-logs:/ragflow/logs
# - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
# env_file: .env
# environment:
# - TZ=${TIMEZONE}
# - HF_ENDPOINT=${HF_ENDPOINT}
# - MACOS=${MACOS}
# entrypoint: "/ragflow/entrypoint_task_executor.sh 1 3"
# networks:
# - ragflow
# restart: on-failure
# # https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
# # If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
# extra_hosts:
# - "host.docker.internal:host-gateway"

View File

@@ -11,6 +11,7 @@ function usage() {
echo " --disable-webserver Disables the web server (nginx + ragflow_server)."
echo " --disable-taskexecutor Disables task executor workers."
echo " --enable-mcpserver Enables the MCP server."
echo " --enable-adminserver Enables the Admin server."
echo " --consumer-no-beg=<num> Start range for consumers (if using range-based)."
echo " --consumer-no-end=<num> End range for consumers (if using range-based)."
echo " --workers=<num> Number of task executors to run (if range is not used)."
@@ -21,12 +22,14 @@ function usage() {
echo " $0 --disable-webserver --consumer-no-beg=0 --consumer-no-end=5"
echo " $0 --disable-webserver --workers=2 --host-id=myhost123"
echo " $0 --enable-mcpserver"
echo " $0 --enable-adminserver"
exit 1
}
ENABLE_WEBSERVER=1    # Default to enable web server
ENABLE_TASKEXECUTOR=1 # Default to enable task executor
ENABLE_MCP_SERVER=0
ENABLE_ADMIN_SERVER=0 # Disable the Admin server by default
CONSUMER_NO_BEG=0
CONSUMER_NO_END=0
WORKERS=1
@@ -70,6 +73,10 @@ for arg in "$@"; do
      ENABLE_MCP_SERVER=1
      shift
      ;;
    --enable-adminserver)
      ENABLE_ADMIN_SERVER=1
      shift
      ;;
    --mcp-host=*)
      MCP_HOST="${arg#*=}"
      shift
@@ -155,8 +162,6 @@ function task_exe() {
  while true; do
    LD_PRELOAD="$JEMALLOC_PATH" \
      "$PY" rag/svr/task_executor.py "${host_id}_${consumer_id}"
    echo "task_executor exited. Sleeping 5s before restart."
    sleep 5
  done
}
@@ -183,10 +188,16 @@ if [[ "${ENABLE_WEBSERVER}" -eq 1 ]]; then
echo "Starting ragflow_server..."
while true; do
"$PY" api/ragflow_server_fastapi.py
"$PY" api/ragflow_server.py
done &
fi
if [[ "${ENABLE_ADMIN_SERVER}" -eq 1 ]]; then
echo "Starting admin_server..."
while true; do
"$PY" admin/server/admin_server.py
done &
fi
if [[ "${ENABLE_MCP_SERVER}" -eq 1 ]]; then
start_mcp_server
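Putting the new flag together with the existing worker options, a hypothetical invocation (the flags come from the usage text above; the script path is assumed to be the container's entrypoint script):

```bash
# Run the web server, two task executors, and the Admin server in one container
./entrypoint.sh --enable-adminserver --workers=2 --host-id=myhost123
```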

View File

@@ -1,5 +1,5 @@
[general]
version = "0.6.0"
version = "0.6.1"
time_zone = "utc-8"
[network]

View File

@@ -1,63 +0,0 @@
#!/bin/bash
# RAGFlow service management script

case "$1" in
  "start")
    echo "Starting the RAGFlow service..."
    ./start-ragflow.sh
    ;;
  "stop")
    echo "Stopping the RAGFlow service..."
    ./stop-ragflow.sh
    ;;
  "restart")
    echo "Restarting the RAGFlow service..."
    ./stop-ragflow.sh
    sleep 5
    ./start-ragflow.sh
    ;;
  "status")
    echo "Checking service status..."
    echo "=== RAGFlow services ==="
    docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "(ragflow-server|ragflow-postgres|ragflow-redis|ragflow-minio|ragflow-opensearch)"
    ;;
  "logs")
    echo "Tailing RAGFlow logs..."
    docker-compose -f docker-compose.yml logs -f ragflow
    ;;
  "init")
    echo "Initializing all services (first use only)..."
    docker-compose -f docker-compose-base.yml up -d
    echo "Waiting for the base services to start..."
    sleep 30
    docker-compose -f docker-compose.yml up -d ragflow
    echo "All services are up!"
    ;;
  "clean")
    echo "Cleaning up all services (including data)..."
    read -p "Are you sure you want to delete all data? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
      docker-compose -f docker-compose.yml down
      docker-compose -f docker-compose-base.yml down -v
      docker network rm ragflow 2>/dev/null || true
      echo "All services cleaned up"
    else
      echo "Operation cancelled"
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status|logs|init|clean}"
    echo ""
    echo "Commands:"
    echo "  start   - Start the RAGFlow service (does not re-create the base services)"
    echo "  stop    - Stop the RAGFlow service (keeps the base services running)"
    echo "  restart - Restart the RAGFlow service"
    echo "  status  - Show service status"
    echo "  logs    - Tail RAGFlow logs"
    echo "  init    - Initialize all services (first use only)"
    echo "  clean   - Clean up all services (including data)"
    exit 1
    ;;
esac

View File

@@ -32,14 +32,14 @@ redis:
  db: 1
  password: '${REDIS_PASSWORD:-infini_rag_flow}'
  host: '${REDIS_HOST:-redis}:6379'
postgres:
  name: '${POSTGRES_DBNAME:-rag_flow}'
  user: '${POSTGRES_USER:-rag_flow}'
  password: '${POSTGRES_PASSWORD:-infini_rag_flow}'
  host: '${POSTGRES_HOST:-postgres}'
  port: 5432
  max_connections: 100
  stale_timeout: 30
# postgres:
#   name: '${POSTGRES_DBNAME:-rag_flow}'
#   user: '${POSTGRES_USER:-rag_flow}'
#   password: '${POSTGRES_PASSWORD:-infini_rag_flow}'
#   host: '${POSTGRES_HOST:-postgres}'
#   port: 5432
#   max_connections: 100
#   stale_timeout: 30
# s3:
#   access_key: 'access_key'
#   secret_key: 'secret_key'
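If you ever need to switch back, re-enabling the commented block above is only half of the change; a sketch of the matching .env side, using variables shown earlier in this commit (the exact set required is an assumption):

```bash
# docker/.env: hypothetical revert to the Postgres backend
DB_TYPE=postgres
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
```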

View File

@@ -1,27 +0,0 @@
#!/bin/bash
# Start script: starts only the ragflow service without re-creating the base services

echo "Checking whether the base services are running..."

# Check whether the base services are running
if ! docker ps --format "table {{.Names}}" | grep -q "ragflow-postgres\|ragflow-redis\|ragflow-minio\|ragflow-opensearch-01"; then
  echo "Base services are not running; starting them now..."
  docker-compose -p ragflow -f docker-compose-base.yml up -d
  echo "Waiting for the base services to finish starting..."
  sleep 30
else
  echo "Base services are already running"
fi

# Check whether the network exists
if ! docker network ls --format "{{.Name}}" | grep -q "ragflow-20250916_ragflow"; then
  echo "The ragflow network does not exist; start the base services first to create it"
  exit 1
fi

echo "Starting the ragflow service..."
docker compose -p ragflow -f docker-compose.yml up -d ragflow

echo "ragflow service started!"
echo "URL: http://localhost:${SVR_HTTP_PORT:-9380}"
View File

@@ -1,12 +0,0 @@
#!/bin/bash
# Stop script: stops only the ragflow service, keeping the base services running

echo "Stopping the ragflow service..."
docker compose -p ragflow -f docker-compose.yml down

# Wait a moment to ensure a complete stop
sleep 10

echo "ragflow service stopped"
echo "Base services (postgres, redis, minio, opensearch) are still running"