docs(blog): add header in README

This commit is contained in:
Acbox
2026-02-16 18:49:40 +08:00
parent 3a2cf708ba
commit 09d7840a5f
3 changed files with 31 additions and 15 deletions
+3
@@ -7,6 +7,7 @@
<img src="./assets/logo.png" alt="Memoh" width="100" height="100">
<h1>Memoh</h1>
<p>Multi-Member, Structured Long-Memory, Containerized AI Agent System.</p>
<p>📌 <a href="https://docs.memoh.ai/blogs/2026-02-16.html">Introduction to Memoh - The Case for an Always-On, Containerized Home Agent</a></p>
<div align="center">
<img src="https://img.shields.io/github/package-json/v/memohai/Memoh" alt="Version" />
<img src="https://img.shields.io/github/license/memohai/Memoh" alt="License" />
@@ -23,6 +24,8 @@
<hr>
</div>
Memoh is an AI agent system platform. Users can create their own AI bots and chat with them via Telegram, Discord, Lark (Feishu), etc. Every bot has an independent container and memory system, which allows it to edit files, execute commands, and build itself. Compared with [OpenClaw](https://openclaw.ai), Memoh provides a more secure, flexible, and scalable solution for multi-bot management.
## Quick Start
+1
@@ -7,6 +7,7 @@
<img src="./assets/logo.png" alt="Memoh" width="100" height="100">
<h1>Memoh</h1>
<p>Multi-user, structured-memory, containerized AI Agent system.</p>
<p>📌 <a href="https://docs.memoh.ai/blogs/2026-02-16.html">Introduction to Memoh - The Case for an Always-On, Containerized Home Agent</a></p>
<div align="center">
<img src="https://img.shields.io/github/package-json/v/memohai/Memoh" alt="Version" />
<img src="https://img.shields.io/github/license/memohai/Memoh" alt="License" />
+27 -15
@@ -6,14 +6,16 @@ author: Team Memoh
# Introduction to Memoh - The Case for an Always-On, Containerized Home Agent
## Overview
We enter 2026 with a familiar tension: models get smarter every quarter, but the “agent experience” still breaks on context, latency, privacy, and real-world workflows. Over the past year, we kept circling three questions:
- Where does the capability boundary of agents actually sit?
- What's the real value of long context?
- What hardware form factor makes “always-on, personal AI” feel natural?
Memoh is our attempt to turn those questions into something buildable—not a manifesto, but a system that can survive contact with reality.
## Story Time
Time travels fast. Somewhere between "I'll remember this" and "wait, why did we decide that?", a year disappears.
That's the annoying part of building: most progress doesn't feel like progress while it's happening. It's just a stream of small choices, half-finished threads, late-night fixes, and the occasional moment that actually clicks. The kind of moment where you sit back and think: okay, this is real.
@@ -30,30 +32,37 @@ Because the thing LLMs can't give you is not "intelligence." It's weight
That's when I realized what I wanted wasn't "an AI that can talk." I wanted an AI that can live with you: quietly, continuously, accumulating context without turning your life into content sludge.
Phones were our first instinct: personal, powerful, always there. But mobile OSes are closed: without OEM privileges you can build an app, not ambient infrastructure.
So we looked for the always-on node every home already has: the router (conceptually). Then the economics clash: router-class hardware can't carry memory, RAG, tools, and multi-user agents. So the device evolves: more RAM and storage, a screen, a mic and speaker, a tiny battery so you can take it with you, a portable form.
Eventually it stops being a router. It becomes a new category: a home agent base layer.
## What
Memoh is a containerized home/studio AI base layer: cloud-grade model capability paired with local-first memory (knowledge base, RAG/search, conversation history) that stays under your control.
## Why
Long-context models raise the ceiling for agents, but they also make "fully local" expensive and "fully cloud" uncomfortable. People don't want to re-brief an AI every day, and they don't want their durable context trapped in someone else's feed. Containerization makes Memoh portable, reproducible, and safe to run as always-on infrastructure, so continuity becomes cheap, private, and dependable.
## How
We run Memoh as a containerized stack: isolated services for storage (files/DB/vector index), retrieval, tool/runtime execution, and the control plane. Inference calls cloud APIs when you need frontier capability; durable memory and indexing stay local. The device acts as an always-on node (router-like, not a router) serving multiple users with strict boundaries: sharing is explicit, private context remains private, and everything is deployable/upgradable as versioned containers.
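The hybrid split described above can be sketched in a few lines. This is a hypothetical illustration, not Memoh's actual code: the names `LocalMemory`, `build_request`, and the `"cloud-frontier"` model identifier are all assumptions. The point it shows is the boundary: only the prompt plus a small retrieved context is prepared for the cloud API, while the durable store itself never leaves the machine.

```python
class LocalMemory:
    """Local-first store: durable context stays on the device (illustrative)."""

    def __init__(self):
        self._events = []

    def append(self, user, text):
        self._events.append({"user": user, "text": text})

    def recent(self, user, n=5):
        # Return the user's last n remembered facts.
        return [e["text"] for e in self._events if e["user"] == user][-n:]


def build_request(user, prompt, memory):
    # Only the prompt plus a small retrieved context would cross the network;
    # the memory store itself is never uploaded.
    return {
        "model": "cloud-frontier",       # assumed cloud endpoint name
        "context": memory.recent(user),  # local-first retrieval
        "prompt": prompt,
    }


memory = LocalMemory()
memory.append("alice", "prefers metric units")
request = build_request("alice", "how far is 5 miles?", memory)
print(request["context"])  # ['prefers metric units']
```

In a real deployment the per-user boundary would be enforced by the container and control plane, not just by a filter in application code.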
## Features
- **Multi-bot Management**: Create multiple bots; humans and bots, or bots with each other, can chat privately, in groups, or collaborate.
![Multi-bot Management](/blogs/2026-02-16/01-multi-bots.png)
- **Containerized**: Each bot runs in its own isolated container. Bots can freely execute commands, edit files, and access the network within their containers—like having their own computer.
![Containerized](/blogs/2026-02-16/02-containerized.png)
- **Memory Engineering**: Every chat is stored in the database, with the last 24 hours of context loaded by default. Each conversation turn is stored as memory and can be retrieved by bots through semantic search.
![Memory Engineering](/blogs/2026-02-16/03-memory-engineering.png)
- **Various Platforms**: Supports Telegram, Lark (Feishu), and more.
- **Simple and Easy to Use**: Configure bots and settings for Provider, Model, Memory, Channel, MCP, and Skills through a graphical interface—no coding required to set up your own AI bot.
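The memory-engineering flow above (store every turn, retrieve relevant ones by semantic search) can be sketched with a toy similarity function. A real deployment would use learned embeddings and a vector index; the bag-of-words cosine below and all names in it are illustrative assumptions, not Memoh's implementation.

```python
import math
from collections import Counter


def embed(text):
    # Toy embedding: word counts. A real system would use an embedding model.
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(memory, query, k=1):
    # Rank stored conversation turns by similarity to the query.
    q = embed(query)
    return sorted(memory, key=lambda turn: cosine(embed(turn), q), reverse=True)[:k]


memory = [
    "user asked about router hardware limits",
    "user set up a Telegram channel for the bot",
    "bot installed ffmpeg inside its container",
]
print(retrieve(memory, "telegram setup"))
```

The same shape scales up: swap `embed` for a model, swap the list scan for a vector index, and keep the last 24 hours of turns loaded as default context.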
@@ -61,18 +70,21 @@ We run Memoh as a containerized stack: isolated services for storage (files/DB/v
- More...
## Comparison to OpenClaw
We share a core belief: both Memoh and OpenClaw treat the agent as more than a chatbox—we give the LLM a playground: a real environment where it can remember, use tools, and iterate.
Where Memoh differs:
- **Lighter and faster**: built as home/studio infrastructure, light enough to run on an edge device
- **Containerized by default**: each bot gets an isolated container (files, commands, network, jobs)
- **Hybrid split**: cloud inference, local-first memory and indexing
- **Multi-user first**: explicit sharing and privacy boundaries, with A2A (agent-to-agent) support
- **Sustainable**: an experienced team with the confidence to push forward and build it
## Conclusion
Memoh is built for one thing: always-on continuity—an AI that stays online, and a memory that stays yours.
We keep frontier inference in the cloud, keep durable context local, and run everything as a containerized, always-on stack. If you want an agent that feels less like an app and more like home infrastructure, that's the bet Memoh is making.
Furthermore, we will continue to operate and permanently open-source Memoh, making it a product with long impact.