docs(blog): Introduction (Feb 2026) (#52)

* docs(blog): add 2026-02-16

* docs(blog): update contents

* docs: add blogs section

* docs(blog): update contents for diff

* docs(blog): add images

---------

Co-authored-by: Ran <16112591+chen-ran@users.noreply.github.com>
Committed by Acbox Liu via GitHub, 2026-02-16 18:38:19 +08:00
parent b8a6a85fbb, commit 3a2cf708ba
9 changed files with 234 additions and 128 deletions
+12
@@ -0,0 +1,12 @@
export const blogs = [
{
text: 'Blogs',
link: '/blogs/index.md',
items: [
{
text: 'Introduction (Feb 2026)',
link: '/blogs/2026-02-16.md'
}
]
}
]
+11 -128
@@ -1,4 +1,7 @@
import { defineConfig } from 'vitepress'
import { blogs } from './blogs'
import { en } from './en'
import { zh } from './zh'
// https://vitepress.vuejs.org/config/app-configs
export default defineConfig({
@@ -25,136 +28,16 @@ export default defineConfig({
themeConfig: {
siteTitle: 'Memoh',
sidebar: {
'/': [
{
text: 'Overview',
link: '/index.md'
},
{
text: 'About Memoh',
link: '/getting-started.md'
},
{
text: 'Installation',
items: [
{
text: 'Docker',
link: '/installation/docker.md'
},
{
text: 'config.toml',
link: '/installation/config-toml.md'
}
]
},
{
text: 'Getting Started',
items: [
{
text: 'Provider and Model',
link: '/getting-started/provider-and-model.md'
}
]
},
{
text: 'Concepts',
items: [
{
text: 'Overview',
link: '/concepts/index.md'
},
{
text: 'Bot',
link: '/concepts/bot.md'
},
{
text: 'Provider and Model',
link: '/concepts/provider-and-model.md'
},
{
text: 'Schedule',
link: '/concepts/schedule.md'
},
{
text: 'Memory',
link: '/concepts/memory.md'
},
{
text: 'Channel',
link: '/concepts/channel.md'
},
{
text: 'Container',
link: '/concepts/container.md'
},
{
text: 'MCP',
link: '/concepts/mcp.md'
},
{
text: 'Subagents',
link: '/concepts/subagents.md'
},
{
text: 'Skills',
link: '/concepts/skills.md'
},
{
text: 'Conversation and History',
link: '/concepts/conversation-and-history.md'
}
]
},
{
text: 'CLI',
items: [
{
text: 'Overview',
link: '/cli/index.md'
},
{
text: 'authentication',
link: '/cli/auth.md'
},
{
text: 'config',
link: '/cli/config.md'
},
{
text: 'provider',
link: '/cli/provider.md'
},
{
text: 'model',
link: '/cli/model.md'
},
{
text: 'bot',
link: '/cli/bot.md'
},
{
text: 'channel',
link: '/cli/channel.md'
},
{
text: 'schedule',
link: '/cli/schedule.md'
},
{
text: 'chat',
link: '/cli/chat.md'
}
]
}
],
'/zh/': [
{
text: '文档总览',
link: '/zh/index.md'
}
]
'/blogs/': blogs,
'/': en,
'/zh/': zh,
},
nav: [
{ text: 'Guides', link: '/' },
{ text: 'Blogs', link: '/blogs/' },
],
logo: {
src: '/logo.png',
alt: 'Memoh'
+122
@@ -0,0 +1,122 @@
export const en = [
{
text: 'Overview',
link: '/index.md'
},
{
text: 'About Memoh',
link: '/getting-started.md'
},
{
text: 'Installation',
items: [
{
text: 'Docker',
link: '/installation/docker.md'
},
{
text: 'config.toml',
link: '/installation/config-toml.md'
}
]
},
{
text: 'Getting Started',
items: [
{
text: 'Provider and Model',
link: '/getting-started/provider-and-model.md'
}
]
},
{
text: 'Concepts',
items: [
{
text: 'Overview',
link: '/concepts/index.md'
},
{
text: 'Bot',
link: '/concepts/bot.md'
},
{
text: 'Provider and Model',
link: '/concepts/provider-and-model.md'
},
{
text: 'Schedule',
link: '/concepts/schedule.md'
},
{
text: 'Memory',
link: '/concepts/memory.md'
},
{
text: 'Channel',
link: '/concepts/channel.md'
},
{
text: 'Container',
link: '/concepts/container.md'
},
{
text: 'MCP',
link: '/concepts/mcp.md'
},
{
text: 'Subagents',
link: '/concepts/subagents.md'
},
{
text: 'Skills',
link: '/concepts/skills.md'
},
{
text: 'Conversation and History',
link: '/concepts/conversation-and-history.md'
}
]
},
{
text: 'CLI',
items: [
{
text: 'Overview',
link: '/cli/index.md'
},
{
text: 'authentication',
link: '/cli/auth.md'
},
{
text: 'config',
link: '/cli/config.md'
},
{
text: 'provider',
link: '/cli/provider.md'
},
{
text: 'model',
link: '/cli/model.md'
},
{
text: 'bot',
link: '/cli/bot.md'
},
{
text: 'channel',
link: '/cli/channel.md'
},
{
text: 'schedule',
link: '/cli/schedule.md'
},
{
text: 'chat',
link: '/cli/chat.md'
}
]
}
]
+6
@@ -0,0 +1,6 @@
export const zh = [
{
text: '文档总览',
link: '/zh/index.md'
}
]
+78
@@ -0,0 +1,78 @@
---
title: Introduction to Memoh - The Case for an Always-On, Containerized Home Agent
author: Team Memoh
---
# Introduction to Memoh - The Case for an Always-On, Containerized Home Agent
## Overview
We enter 2026 with a familiar tension: models get smarter every quarter, but the “agent experience” still breaks on context, latency, privacy, and real-world workflows. Over the past year, we kept circling three questions:
- Where does the capability boundary of agents actually sit?
- What's the real value of long context?
- What hardware form factor makes “always-on, personal AI” feel natural?
Memoh is our attempt to turn those questions into something buildable—not a manifesto, but a system that can survive contact with reality.
## Story Time
Time travels fast. Somewhere between “I'll remember this” and “wait, why did we decide that?”, a year disappears.
That's the annoying part of building: most progress doesn't feel like progress while it's happening. It's just a stream of small choices, half-finished threads, late-night fixes, and the occasional moment that actually clicks. The kind of moment where you sit back and think: okay—this is real.
Around the same time, I noticed something else: the internet started to feel smoother—and worse.
Text got cleaner, longer, more polite, more… empty. You could smell when something was generated: low information density, too many metaphors, too much agreement, not enough stakes.
I caught myself doing it too.
So I started forcing a constraint: say it plainly. Keep the density. Don't inflate. Don't hide behind style. If something mattered, anchor it to a real moment, a real trade-off, a real cost paid.
Because the thing LLMs can't give you is not “intelligence.” It's weight. The feeling that a human actually stood somewhere in time and wrote from that position.
That's when I realized what I wanted wasn't “an AI that can talk.” I wanted an AI that can live with you—quietly, continuously, accumulating context without turning your life into content sludge.
Phones were our first instinct—personal, powerful, always there. But mobile OSes are closed: without OEM privileges you can build an app, not ambient infrastructure.
So we looked for the always-on node every home already has: the router (conceptually). Then the economics clash: router-class hardware can't carry memory, RAG, tools, and multi-user agents. So the device evolves: more RAM and storage, a screen, a mic and speaker, a tiny battery so it can be taken along in a portable form.
Eventually it stops being a router. It becomes a new category: a home agent base layer.
## What
Memoh is a containerized home/studio AI base layer: cloud-grade model capability paired with local-first memory (knowledge base, RAG/search, conversation history) that stays under your control.
## Why
Long-context models raise the ceiling for agents—but they also make “fully local” expensive and “fully cloud” uncomfortable. People don't want to re-brief an AI every day, and they don't want their durable context trapped in someone else's feed. Containerization makes Memoh portable, reproducible, and safe to run as always-on infrastructure—so continuity becomes cheap, private, and dependable.
## How
We run Memoh as a containerized stack: isolated services for storage (files/DB/vector index), retrieval, tool/runtime execution, and the control plane. Inference calls cloud APIs when you need frontier capability; durable memory and indexing stay local. The device acts as an always-on node (router-like, not a router) serving multiple users with strict boundaries: sharing is explicit, private context remains private, and everything is deployable/upgradable as versioned containers.
## Features
- **Multi-bot Management**: Create multiple bots; humans and bots, or bots with one another, can chat privately, chat in groups, or collaborate.
![Multi-bot Management](/blogs/2026-02-16/01-multi-bots.png)
- **Containerized**: Each bot runs in its own isolated container. Bots can freely execute commands, edit files, and access the network within their containers—like having their own computer.
![Containerized](/blogs/2026-02-16/02-containerized.png)
- **Memory Engineering**: Every chat is stored in the database, with the last 24 hours of context loaded by default. Each conversation turn is stored as memory and can be retrieved by bots through semantic search.
![Memory Engineering](/blogs/2026-02-16/03-memory-engineering.png)
- **Various Platforms**: Supports Telegram, Lark (Feishu), and more.
- **Simple and Easy to Use**: Configure bots and settings for Provider, Model, Memory, Channel, MCP, and Skills through a graphical interface—no coding required to set up your own AI bot.
- **Scheduled Tasks**: Schedule tasks with cron expressions to run commands at specified times.
- More...
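The memory feature above boils down to ranking stored conversation turns by embedding similarity. As an illustrative toy only (the `Turn` type and `recall` function are assumptions for this sketch, not Memoh's actual API, and a real deployment would use a database plus a vector index rather than an in-memory array):

```typescript
// Toy sketch of semantic recall over stored turns. Each turn carries an
// embedding produced at write time; retrieval ranks by cosine similarity.
type Turn = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored turns most similar to the query embedding.
function recall(store: Turn[], query: number[], k: number): Turn[] {
  return [...store]
    .sort((x, y) => cosine(y.embedding, query) - cosine(x.embedding, query))
    .slice(0, k);
}
```

At scale, the full sort would be replaced by an approximate-nearest-neighbor index; the ranking idea stays the same.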
## Comparison with OpenClaw
We share a core belief: both Memoh and OpenClaw treat the agent as more than a chatbox—we give the LLM a playground, a real environment where it can remember, use tools, and iterate.
Where Memoh differs:
- Lighter and faster: built as home/studio infrastructure, small enough to run on an edge device.
- Containerized by default: each bot gets an isolated container (files/commands/network/jobs).
- Hybrid split: cloud inference, local-first memory + indexing.
- Multi-user first: explicit sharing and privacy boundaries, with support for A2A (Agent2Agent).
- Sustainable: an experienced team with the confidence to keep pushing forward and build it out.
## Conclusion
Memoh is built for one thing: always-on continuity—an AI that stays online, and a memory that stays yours.
We keep frontier inference in the cloud, keep durable context local, and run everything as a containerized, always-on stack. If you want an agent that feels less like an app and more like home infrastructure, that's the bet Memoh is making.
Furthermore, we will continue to operate Memoh and keep it permanently open source, making it a product with lasting impact.
+5
@@ -0,0 +1,5 @@
# Blogs
This section contains the latest blogs about Memoh.
- [Introduction (Feb 2026)](/blogs/2026-02-16.md)
Binary image files added (not shown in diff): 39 KiB, 263 KiB, and 230 KiB.