Use Cases

Explore how moltbot can be used in real workflows.

Automated Backups

Description

Run backups on a schedule to reduce human error and ensure critical data is always protected.

Who is it for?

  • Sysadmins
  • DBAs
  • Ops engineers
  • Anyone who needs regular backups

Steps

  1. Choose source & destination
  2. Set a schedule
  3. Pick full/incremental strategy
  4. Define retention
  5. Enable and monitor

Example config

task:
  name: "数据备份"
  schedule: "0 2 * * *"  # 每天凌晨2点
  action:
    type: "backup"
    source: "/data/important"
    destination: "/backup/daily"
    compression: true
    retention: 30  # keep 30 days of backups
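
To make the behaviour concrete, here is a minimal Python sketch of what a daily compressed backup with 30-day retention amounts to. It is not moltbot's implementation: the archive naming is an arbitrary choice, and the scheduling itself is left to cron (0 2 * * *) or any other scheduler.

"""Sketch only: mirrors the paths, compression and retention of the config above."""
import shutil
import time
from datetime import datetime
from pathlib import Path

SOURCE = Path("/data/important")
DEST = Path("/backup/daily")
RETENTION_DAYS = 30

def run_backup() -> Path:
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    # compression: true  ->  write a gzipped tarball of the source tree
    archive = shutil.make_archive(str(DEST / f"important-{stamp}"), "gztar", root_dir=SOURCE)
    return Path(archive)

def prune_old_backups() -> None:
    # retention: 30  ->  drop archives older than 30 days
    cutoff = time.time() - RETENTION_DAYS * 86400
    for f in DEST.glob("important-*.tar.gz"):
        if f.stat().st_mtime < cutoff:
            f.unlink()

if __name__ == "__main__":
    print("wrote", run_backup())
    prune_old_backups()

Time-stamped archives plus a prune pass are what make a simple numeric retention setting cheap to enforce.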

Expected outcome

  • Hands-off scheduled backups
  • Lower risk of data loss
  • Save time and effort
  • Traceable history

File Watching & Processing

Description

Watch a folder for new or modified files and automatically convert, compress, or move them.

Who is it for?

  • Content admins
  • Workflow builders
  • Media teams
  • Automation users

Steps

  1. Set watch path
  2. Select events
  3. Define rules
  4. Choose post-actions
  5. Start the watcher

Example config

task:
  name: "文件监控处理"
  watch:
    path: "/uploads"
    recursive: true
    events: ["create", "modify"]
    filters:
      extensions: [".jpg", ".png", ".pdf"]
  action:
    type: "process"
    steps:
      - type: "convert"
        format: "webp"
      - type: "move"
        destination: "/processed"
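
As a rough illustration of the watch-then-process flow, the sketch below uses the third-party watchdog and Pillow packages (pip install watchdog pillow), not moltbot itself: it watches /uploads recursively, converts new JPG/PNG files to WebP, and moves other matching files to /processed. Writing conversions straight into the output directory is an illustrative simplification.

"""Sketch only: approximates the watcher config above with watchdog + Pillow."""
import shutil
import time
from pathlib import Path

from PIL import Image
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH = Path("/uploads")
OUT = Path("/processed")
EXTS = {".jpg", ".png", ".pdf"}

class UploadHandler(FileSystemEventHandler):
    def on_created(self, event):
        self._handle(event)

    def on_modified(self, event):
        self._handle(event)

    def _handle(self, event):
        path = Path(event.src_path)
        if event.is_directory or path.suffix.lower() not in EXTS:
            return
        OUT.mkdir(parents=True, exist_ok=True)
        if path.suffix.lower() in {".jpg", ".png"}:
            # step 1: convert images to WebP, writing directly into /processed
            Image.open(path).save(OUT / path.with_suffix(".webp").name, "WEBP")
        else:
            # non-image files (e.g. PDFs) are just moved
            shutil.move(str(path), str(OUT / path.name))

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(UploadHandler(), str(WATCH), recursive=True)  # recursive: true
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()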

Expected outcome

  • Real-time processing
  • Less manual work
  • Consistent outputs
  • Higher throughput

Scheduled Data Sync

Description

Keep systems in sync by periodically pulling from APIs or databases and writing to your destination.

Who is it for?

  • Developers
  • DevOps engineers
  • Analysts
  • Teams with multi-system data

Steps

  1. Configure source
  2. Configure destination
  3. Set interval
  4. Transform if needed
  5. Add retries
  6. Enable

Example config

task:
  name: "API数据同步"
  schedule: "*/15 * * * *"  # 每15分钟
  action:
    type: "sync"
    source:
      type: "api"
      url: "https://api.example.com/data"
      method: "GET"
      headers:
        Authorization: "Bearer ${API_TOKEN}"
    destination:
      type: "database"
      connection: "postgresql://localhost/db"
      table: "synced_data"
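
Conceptually, the sync action is a fetch-and-upsert loop. The sketch below approximates it with requests and psycopg2 (pip install requests psycopg2-binary); the response shape (a JSON list of records with an "id" field) and the synced_data(id, payload) table layout are assumptions made for illustration, not moltbot's actual behaviour.

"""Sketch only: one sync cycle against the endpoint and database from the config."""
import json
import os

import psycopg2
import requests

API_URL = "https://api.example.com/data"
DSN = "postgresql://localhost/db"

def sync_once() -> int:
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()  # assumed shape: [{"id": ..., ...}, ...]

    # the connection context manager commits the transaction on success
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        for rec in records:
            cur.execute(
                """
                INSERT INTO synced_data (id, payload)
                VALUES (%s, %s)
                ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload
                """,
                (rec["id"], json.dumps(rec)),
            )
    return len(records)

if __name__ == "__main__":
    print("synced", sync_once(), "records")  # schedule via cron: */15 * * * *

Upserting on a stable key keeps repeated 15-minute runs idempotent, so a retried or overlapping run does not duplicate rows.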

Expected outcome

  • Always-fresh data
  • Fewer inconsistencies
  • Better accuracy
  • Less operational toil

Monitoring & Alerts

Description

Check CPU/memory/disk on a schedule and send alerts (or take action) when thresholds are exceeded.

Who is it for?

  • Ops engineers
  • Admins
  • DevOps teams
  • Anyone monitoring servers

Steps

  1. Pick metrics
  2. Set thresholds
  3. Configure notifications
  4. Define actions (optional)
  5. Enable monitoring

Example config

task:
  name: "系统监控"
  schedule: "*/5 * * * *"  # 每5分钟检查
  action:
    type: "monitor"
    metrics:
      - name: "cpu_usage"
        threshold: 80
      - name: "memory_usage"
        threshold: 85
      - name: "disk_usage"
        threshold: 90
    alerts:
      - type: "email"
        recipients: ["admin@example.com"]
      - type: "webhook"
        url: "https://hooks.example.com/alert"

Expected outcome

  • Catch issues early
  • Automatic notifications
  • Less downtime
  • Improved stability

More ideas

  • 📧 Email Automation: send reports and notifications, process attachments, and more.
  • 🔄 Workflow Automation: automate complex workflows to improve efficiency.
  • 📊 Report Generation: generate and distribute reports on a schedule.
  • 🧹 System Cleanup: clean temp files and logs to free disk space (see the sketch after this list).
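
As one concrete instance of the cleanup idea, such a pass is typically just "delete files older than N days". The sketch below assumes hypothetical directories (/var/log/myapp, /tmp/myapp) and a 7-day cutoff; adjust both to your environment.

"""Sketch only: age-based cleanup over assumed directories."""
import time
from pathlib import Path

TARGETS = [Path("/var/log/myapp"), Path("/tmp/myapp")]  # assumed paths
MAX_AGE_DAYS = 7

def cleanup() -> int:
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    removed = 0
    for root in TARGETS:
        if not root.exists():
            continue
        for f in root.rglob("*"):
            # delete regular files that have not been modified recently
            if f.is_file() and f.stat().st_mtime < cutoff:
                f.unlink()
                removed += 1
    return removed

if __name__ == "__main__":
    print("removed", cleanup(), "files")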