State That Became Infrastructure
Every system accumulates small state that nobody planned to make official: feature flags, webhook receipts, memoized job outputs, one-time tokens, temporary controls, and coordination values. The risk is not that JSON is a bad format. The risk is that the state becomes operational without names, authority, expiry, or history.
Latch is a small Go service for that class of state. It stores opaque JSON blobs in SQLite, exposes them over HTTP, records writes in an audit log, ships SDKs and examples, and includes an operator shell for inspection and token management.
The real work is the set of boundaries around the store: typed identity, token scopes, TTL-aware reads, migrations, rate limits, metrics, maintenance, local discovery, and a shell that can operate through HTTP or directly against the SQLite file.
The Address Is the Contract
Every entity is addressed by (project, type, key). Type is part of identity, so the same key can safely represent different operational concepts. A note and a summary can share a logical key without colliding. A request with the wrong type gets a miss instead of the wrong blob.
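That guarantee is just the primary key doing its job. A minimal sketch against the identity columns from the migration below, using database/sql and any Go SQLite driver (the schema here is trimmed to the address itself):

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // any SQLite driver will do
)

func main() {
	db, _ := sql.Open("sqlite3", ":memory:")
	db.Exec(`CREATE TABLE entities (
	    project TEXT NOT NULL, type TEXT NOT NULL, key TEXT NOT NULL,
	    blob BLOB NOT NULL, PRIMARY KEY (project, type, key))`)

	// Same project and key, two types: both rows coexist.
	db.Exec(`INSERT INTO entities VALUES ('docs', 'note', 'q3-plan', '{"body":"draft"}')`)
	db.Exec(`INSERT INTO entities VALUES ('docs', 'summary', 'q3-plan', '{"tldr":"ship it"}')`)

	// Asking with the wrong type is a miss, not the wrong blob.
	var blob string
	err := db.QueryRow(`SELECT blob FROM entities
	    WHERE project = 'docs' AND type = 'flag' AND key = 'q3-plan'`).Scan(&blob)
	fmt.Println(err) // sql: no rows in result set
}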
That pivot is visible in the migration that introduced typed identity. It made the address explicit in the schema instead of relying on naming conventions inside string keys.
CREATE TABLE entities_new (
    project    TEXT NOT NULL,
    type       TEXT NOT NULL,
    key        TEXT NOT NULL,
    blob       BLOB NOT NULL,
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (project, type, key)
);

A Write Is Also Evidence
Create, update, and delete use transactional storage paths that append log rows in the same transaction. This is the difference between a small datastore and an accountable control plane. A mutation is not complete unless the record and the evidence of the mutation move together.
The caveat is important: logs are append-only at write time, but current maintenance prunes old logs after seven days. The useful claim is mutation evidence, not permanent archival history.
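The retention pass itself is small; a sketch, assuming the logs table carries a created_at column the way entities does (that column name is an assumption, not confirmed schema):

// pruneLogs drops audit rows older than the seven-day retention
// window. Assumes logs.created_at exists; sketch only.
func (s *Store) pruneLogs() error {
	_, err := s.db.Exec(
		`DELETE FROM logs WHERE created_at < datetime('now', '-7 days')`,
	)
	return err
}

The write path that produces those rows pairs the entity insert with its log row: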
func (s *Store) CreateEntity(project, entityType, key string, blob []byte, expiresAt *time.Time) error {
	tx, err := s.db.Begin()
	if err != nil {
		return err
	}
	// Roll back automatically unless Commit succeeds below.
	defer tx.Rollback()

	_, err = tx.Exec(
		`INSERT INTO entities (project, type, key, blob, expires_at) VALUES (?, ?, ?, ?, ?)`,
		project,
		entityType,
		key,
		blob,
		expiresAt,
	)
	if err != nil {
		return err
	}

	// The audit row rides in the same transaction: either both the
	// entity and its evidence land, or neither does.
	_, err = tx.Exec(
		`INSERT INTO logs (project, key, operation, blob) VALUES (?, ?, 'create', ?)`,
		project,
		key,
		blob,
	)
	if err != nil {
		return err
	}
	return tx.Commit()
}

One Binary, Two Operating Modes
latch serve runs the API, maintenance, metrics, optional backups, and optional event forwarding. latch shell can use the HTTP API for normal operations and direct SQLite access for inspection, SQL queries, and token management. Local instance discovery writes per-process instance files so the shell can attach to a running server without manual URL plumbing.
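A sketch of the shell's side of that discovery, assuming an instance file is a small JSON document with a pid and base URL; the directory and field names here are illustrative, not Latch's actual on-disk format:

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// instanceInfo mirrors a hypothetical per-process instance file.
type instanceInfo struct {
	PID int    `json:"pid"`
	URL string `json:"url"`
}

// discoverInstances scans a well-known directory for files written
// by running servers, so the shell can attach without a URL flag.
func discoverInstances(dir string) ([]instanceInfo, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var found []instanceInfo
	for _, e := range entries {
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue // skip unreadable or vanished files
		}
		var info instanceInfo
		if json.Unmarshal(data, &info) == nil {
			found = append(found, info)
		}
	}
	return found, nil
}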
The scope syntax is operationally compact: a spec reads project[/type][#keyprefix]=permissions, so a target can name a project, an optional type, an optional key prefix, and the permissions it grants. That is the kind of sharp interface small infrastructure needs, because most mistakes come from giving a convenience token too much reach.
func parseScopeSpec(spec string) (store.TokenScope, error) {
	parts := strings.SplitN(spec, "=", 2)
	if len(parts) != 2 {
		return store.TokenScope{}, fmt.Errorf("invalid scope %q: expected target=permissions", spec)
	}
	target := strings.TrimSpace(parts[0])
	perms := strings.ToLower(strings.TrimSpace(parts[1]))

	// An optional "#" suffix narrows the scope to a key prefix.
	var prefix string
	if hash := strings.Index(target, "#"); hash >= 0 {
		prefix = target[hash+1:]
		target = target[:hash]
	}

	// An optional "/" segment narrows the scope to one entity type.
	var typ string
	if slash := strings.Index(target, "/"); slash >= 0 {
		typ = target[slash+1:]
		target = target[:slash]
	}

	scope := store.TokenScope{
		Project:   strings.TrimSpace(target),
		Type:      strings.TrimSpace(typ),
		KeyPrefix: strings.TrimSpace(prefix),
	}
	return scopeFromPermissions(scope, perms)
}

Examples Define the Workload
The examples are the best argument for the tool. Feature flags demonstrate typed configuration. A webhook inbox keeps pending and processed receipts in a shared store. Job memoization uses a content hash as the key. One-time token flows show auditability, with the caveat that the current example does not implement atomic redemption.
That is the right workload boundary: single-process, SQLite-backed operational state with shallow querying, key search, JSON-path equality filters, scoped tokens, and enough inspection to stop random JSON from becoming invisible production infrastructure.
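The JSON-path filters map naturally onto SQLite's built-in json_extract; a storage-level sketch of the shape, not Latch's actual query surface:

// findByJSONPath returns the keys whose blob matches value at a JSON
// path such as "$.status". Sketch against the entities schema only.
func (s *Store) findByJSONPath(project, typ, path, value string) ([]string, error) {
	rows, err := s.db.Query(
		`SELECT key FROM entities
		 WHERE project = ? AND type = ?
		   AND json_extract(blob, ?) = ?`,
		project, typ, path, value,
	)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var keys []string
	for rows.Next() {
		var k string
		if err := rows.Scan(&k); err != nil {
			return nil, err
		}
		keys = append(keys, k)
	}
	return keys, rows.Err()
}

The memoization example shows the client side of that workload: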
import { createHash } from "node:crypto";

export async function memoizedJob(payload: Record<string, unknown>, ttl = 60) {
  // Content-addressed key: identical payloads map to the same entity.
  const key = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex");

  try {
    const cached = await latch.getEntity(key, "job");
    const age = Date.now() / 1000 - cached.ts;
    if (age < ttl) return cached.result;

    // Stale-while-revalidate: serve the old result immediately and
    // refresh the entity in the background.
    const stale = cached.result;
    (async () => {
      const result = await expensiveJob(payload);
      await latch.updateEntity(key, "job", {
        result,
        ts: Math.floor(Date.now() / 1000),
      });
    })().catch(() => {
      // A failed background refresh just means the next caller recomputes.
    });
    return stale;
  } catch (err) {
    // Cache miss: compute once, then store the result for later calls.
    const result = await expensiveJob(payload);
    await latch.createEntity("job", { result, ts: Math.floor(Date.now() / 1000) }, key);
    return result;
  }
}
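The one-time-token caveat has a concrete storage-level answer, too. A sketch of atomic redemption using SQLite's DELETE ... RETURNING (available since 3.35); this illustrates the fix, it is not an endpoint Latch currently exposes:

// redeemToken removes the token entity and returns its blob in one
// statement, so two concurrent redeemers cannot both win. In Latch
// proper the delete would also append its audit row transactionally.
func (s *Store) redeemToken(project, key string) ([]byte, error) {
	var blob []byte
	err := s.db.QueryRow(
		`DELETE FROM entities
		 WHERE project = ? AND type = 'token' AND key = ?
		 RETURNING blob`,
		project, key,
	).Scan(&blob)
	if err == sql.ErrNoRows {
		return nil, errors.New("token already redeemed or never issued")
	}
	return blob, err
}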