
Chapter 2: Persistent State

In Chapter 1, we built all our blockchain primitives in memory. But a blockchain that forgets everything when you restart it isn’t very useful. In this chapter, we’ll build the storage layer — the persistent backbone that remembers accounts, blocks, and contract state.

By the end of this chapter, you’ll have implemented:

| Module | Purpose |
| --- | --- |
| `db.rs` | sled database wrapper with serialization |
| `state.rs` | World state (accounts, balances, nonces) |
| `chain.rs` | Block storage and chain head tracking |

Think of the storage layer as a bank’s ledger system:

| Blockchain Storage | Bank Equivalent |
| --- | --- |
| World State | The master ledger of all accounts |
| Account Record | Individual account file (balance, transaction count) |
| Block Storage | Daily transaction batch records |
| Contract Storage | Safe deposit boxes (contract-specific data) |
| State Root | Daily audit checksum (“everything balances”) |

A bank doesn’t keep all records in the teller’s head — they’re written to a database. If the bank’s systems restart, all account balances are still there. That’s what we’re building.


Before we write code, let’s understand our database choice.

| Type | Examples | How It Works |
| --- | --- | --- |
| Embedded | sled, RocksDB, SQLite | Library linked into your app; no separate process |
| Client-Server | PostgreSQL, MySQL, MongoDB | Separate daemon; your app connects over the network |

For a blockchain node, embedded is better:

  • Single process = simpler deployment
  • No network overhead between app and database
  • Database lives and dies with the node
| Type | Data Model | Query Style | Best For |
| --- | --- | --- | --- |
| Key-Value | key → bytes | Point lookups | Simple, fast access patterns |
| Relational | Tables, rows, columns | SQL queries | Complex relationships, joins |

Blockchain state is inherently key-value:

  • address → account
  • block_hash → block
  • (contract, slot) → value

We don’t need JOINs or complex queries — just fast lookups and writes.

| Database | Language | Pros | Cons |
| --- | --- | --- | --- |
| sled | Pure Rust | Zero dependencies, great Rust API | Younger than alternatives |
| RocksDB | C++ | Battle-tested (used by Ethereum) | C++ bindings, complex tuning |
| LevelDB | C++ | Simple, proven | Older, fewer features |
| SQLite | C | SQL flexibility | Overkill for key-value |

| Scenario | Better Choice |
| --- | --- |
| Production Ethereum client | RocksDB (proven at scale) |
| Learning project | sled (simpler, pure Rust) |
| Need SQL queries | SQLite |
| Distributed database | Don’t — each node has its own storage |

For minichain, sled is perfect: it’s simple, fast, and lets us focus on blockchain concepts rather than database operations.


Let’s build a thin wrapper around sled that handles serialization and provides a clean API.

crates/storage/src/db.rs

```rust
use sled::Db;
use std::path::Path;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum StorageError {
    #[error("Database error: {0}")]
    Database(#[from] sled::Error),
    #[error("Serialization error")]
    Serialization,
    #[error("Key not found: {0}")]
    NotFound(String),
    // Variants used by chain.rs and state.rs later in this chapter:
    #[error("Invalid genesis block: {0}")]
    InvalidGenesis(String),
    #[error("Invalid storage value")]
    InvalidStorageValue,
}

pub type Result<T> = std::result::Result<T, StorageError>;

/// Wrapper around sled database with serialization helpers.
pub struct Storage {
    // pub(crate) so sibling modules (e.g. state.rs) can run prefix scans.
    pub(crate) db: Db,
}

impl Storage {
    /// Open a database at the given path.
    pub fn open<P: AsRef<Path>>(path: P) -> Result<Self> {
        let db = sled::open(path)?;
        Ok(Self { db })
    }

    /// Open an in-memory database (for testing).
    pub fn open_temporary() -> Result<Self> {
        let db = sled::Config::new().temporary(true).open()?;
        Ok(Self { db })
    }
}

use rkyv::{Archive, Deserialize, Serialize};
use rkyv::ser::serializers::AllocSerializer;
use rkyv::de::deserializers::SharedDeserializeMap;

impl Storage {
    /// Store a serializable value.
    pub fn put<V>(&self, key: impl AsRef<[u8]>, value: &V) -> Result<()>
    where
        V: Archive + Serialize<AllocSerializer<256>>,
    {
        let bytes = rkyv::to_bytes::<_, 256>(value)
            .map_err(|_| StorageError::Serialization)?;
        self.db.insert(key, bytes.as_slice())?;
        Ok(())
    }

    /// Retrieve and deserialize a value.
    pub fn get<V>(&self, key: impl AsRef<[u8]>) -> Result<Option<V>>
    where
        V: Archive,
        V::Archived: Deserialize<V, SharedDeserializeMap>,
    {
        match self.db.get(key)? {
            Some(bytes) => {
                let value = rkyv::from_bytes::<V>(&bytes)
                    .map_err(|_| StorageError::Serialization)?;
                Ok(Some(value))
            }
            None => Ok(None),
        }
    }

    /// Retrieve a value, returning an error if not found.
    pub fn get_or_err<V>(&self, key: impl AsRef<[u8]> + std::fmt::Debug + Clone) -> Result<V>
    where
        V: Archive,
        V::Archived: Deserialize<V, SharedDeserializeMap>,
    {
        self.get(key.clone())?
            .ok_or_else(|| StorageError::NotFound(format!("{:?}", key)))
    }

    /// Delete a key.
    pub fn delete(&self, key: impl AsRef<[u8]>) -> Result<()> {
        self.db.remove(key)?;
        Ok(())
    }

    /// Check if a key exists.
    pub fn contains(&self, key: impl AsRef<[u8]>) -> Result<bool> {
        Ok(self.db.contains_key(key)?)
    }
}
```

A key-value database stores bytes, not Rust structs. We can’t just write:

```rust
db.insert("alice", account); // Won't work — account is a struct, not bytes
```

Serialization converts structured data (structs, enums, vectors) into a flat byte sequence that can be stored or transmitted. Deserialization reverses the process.

```
Account { balance: 1000, nonce: 5 }
        ↓ serialize
[0x00, 0x00, 0x03, 0xE8, 0x00, 0x00, 0x00, 0x05, ...]
        ↓ store in database
db.insert("alice", bytes)
        ↓ retrieve later
db.get("alice") → bytes
        ↓ deserialize
Account { balance: 1000, nonce: 5 }
```
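To make that round trip concrete, here's a std-only sketch using a hand-rolled fixed-width big-endian encoding. This illustrates the concept only; rkyv's actual wire format is different and more sophisticated.

```rust
// Illustration only: a hand-rolled fixed-width encoding, NOT rkyv's format.
#[derive(Debug, PartialEq)]
struct Account {
    balance: u64,
    nonce: u64,
}

// Struct → flat bytes: 8 bytes of balance, then 8 bytes of nonce.
fn serialize(a: &Account) -> [u8; 16] {
    let mut buf = [0u8; 16];
    buf[..8].copy_from_slice(&a.balance.to_be_bytes());
    buf[8..].copy_from_slice(&a.nonce.to_be_bytes());
    buf
}

// Flat bytes → struct: reverse the layout above.
fn deserialize(buf: &[u8; 16]) -> Account {
    Account {
        balance: u64::from_be_bytes(buf[..8].try_into().unwrap()),
        nonce: u64::from_be_bytes(buf[8..].try_into().unwrap()),
    }
}

fn main() {
    let alice = Account { balance: 1000, nonce: 5 };
    let bytes = serialize(&alice);
    // 1000 = 0x03E8, so the balance's last two bytes are [0x03, 0xE8].
    assert_eq!(bytes[6..8], [0x03, 0xE8]);
    let restored = deserialize(&bytes);
    assert_eq!(restored, alice);
}
```

A real format also has to handle variable-length fields, enums, and versioning, which is exactly what serialization frameworks take off your hands.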
| Format | Human Readable | Size | Speed | Zero-Copy | Self-Describing |
| --- | --- | --- | --- | --- | --- |
| JSON | Yes | Large | Slow | No | Yes |
| bincode | No | Compact | Fast | No | No |
| MessagePack | No | Compact | Fast | No | Yes |
| rkyv | No | Compact | Fastest | Yes | No |
| Protobuf | No | Compact | Fast | No | No (needs schema) |

Self-describing means the encoded data includes field names/types, so you can decode it without knowing the original struct definition.

Zero-copy means you can access data directly from the serialized bytes without allocating new memory or copying data.

For storage, we don’t need human readability — we need speed and compactness. rkyv (pronounced “archive”) is among the fastest serialization frameworks for Rust:

  • Zero-copy deserialization — access data directly from stored bytes without copying
  • Pure Rust — no external dependencies or code generation
  • Compact — no field names, no type tags, just raw data
  • Fast — rkyv’s own benchmarks report 2-10x speedups over alternative formats

To use rkyv, your structs need to derive the rkyv traits:

```rust
use rkyv::{Archive, Deserialize, Serialize};

#[derive(Archive, Deserialize, Serialize, Debug, Clone)]
#[archive(check_bytes)] // Enable validation for untrusted data
pub struct Account {
    pub nonce: u64,
    pub balance: u64,
    pub code_hash: Option<Hash>,
    pub storage_root: Hash,
}
```

Different types of data need different key prefixes to avoid collisions:

```rust
use minichain_core::{Address, Hash};

impl Storage {
    /// Create a prefixed key for accounts.
    pub fn account_key(address: &Address) -> Vec<u8> {
        let mut key = b"account:".to_vec();
        key.extend_from_slice(&address.0);
        key
    }

    /// Create a prefixed key for blocks by height.
    pub fn block_height_key(height: u64) -> Vec<u8> {
        format!("block:height:{}", height).into_bytes()
    }

    /// Create a prefixed key for blocks by hash.
    pub fn block_hash_key(hash: &Hash) -> Vec<u8> {
        let mut key = b"block:hash:".to_vec();
        key.extend_from_slice(&hash.0);
        key
    }

    /// Create a prefixed key for contract storage.
    pub fn contract_storage_key(contract: &Address, slot: &[u8]) -> Vec<u8> {
        let mut key = b"storage:".to_vec();
        key.extend_from_slice(&contract.0);
        key.push(b':');
        key.extend_from_slice(slot);
        key
    }
}
```

The world state is the complete snapshot of all accounts at a given point in time. It’s the blockchain’s source of truth.

crates/storage/src/state.rs

```rust
use minichain_core::{Account, Address, Hash, hash};
use crate::db::{Storage, Result, StorageError};

/// Manages the world state (all accounts).
pub struct StateManager<'a> {
    storage: &'a Storage,
}

impl<'a> StateManager<'a> {
    pub fn new(storage: &'a Storage) -> Self {
        Self { storage }
    }
}

impl<'a> StateManager<'a> {
    /// Create or update an account.
    pub fn put_account(&self, address: &Address, account: &Account) -> Result<()> {
        let key = Storage::account_key(address);
        self.storage.put(key, account)
    }

    /// Get an account, returning default (empty) if not found.
    pub fn get_account(&self, address: &Address) -> Result<Account> {
        let key = Storage::account_key(address);
        // `get` has a single type parameter: the value type.
        Ok(self.storage.get::<Account>(key)?.unwrap_or_default())
    }

    /// Check if an account exists.
    pub fn account_exists(&self, address: &Address) -> Result<bool> {
        let key = Storage::account_key(address);
        self.storage.contains(key)
    }
}
```

In blockchain, every address implicitly exists with zero balance and zero nonce. You can send Mini Coins to any address — it doesn’t need to be “created” first.

```rust
// This is fine — Bob doesn't need to exist yet
transfer(alice, bob, 100);

// After transfer, Bob's account is created automatically
assert_eq!(get_balance(bob), 100);
```
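The `unwrap_or_default()` in `get_account` is what implements this rule. Here's a std-only sketch of the same idea with a `HashMap` standing in for the database; the types and `transfer` helper are illustrative, not the crate's API:

```rust
use std::collections::HashMap;

#[derive(Default, Clone, Debug, PartialEq)]
struct Account {
    balance: u64,
    nonce: u64,
}

// In-memory stand-in for StateManager: every address implicitly exists.
struct State(HashMap<&'static str, Account>);

impl State {
    fn get_account(&self, addr: &str) -> Account {
        // Missing accounts read as the default (zero balance, zero nonce).
        self.0.get(addr).cloned().unwrap_or_default()
    }

    fn transfer(&mut self, from: &'static str, to: &'static str, amount: u64) -> Result<(), String> {
        let mut src = self.get_account(from);
        if src.balance < amount {
            return Err(format!("insufficient balance: {} < {}", src.balance, amount));
        }
        src.balance -= amount;
        let mut dst = self.get_account(to);
        dst.balance += amount;
        self.0.insert(from, src);
        self.0.insert(to, dst); // the recipient is created on first write
        Ok(())
    }
}

fn main() {
    let mut state = State(HashMap::new());
    state.0.insert("alice", Account { balance: 1000, nonce: 5 });
    // Bob has never been seen, yet the transfer succeeds.
    state.transfer("alice", "bob", 100).unwrap();
    assert_eq!(state.get_account("bob").balance, 100);
    assert_eq!(state.get_account("alice").balance, 900);
}
```

The only real difference from the production path is that the real `get_account` reads from sled instead of a `HashMap`.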

Contracts have associated bytecode. We store it separately (not in the Account struct) to avoid loading large bytecode for simple balance checks:

```rust
impl<'a> StateManager<'a> {
    /// Store contract bytecode.
    pub fn put_code(&self, code_hash: &Hash, code: &[u8]) -> Result<()> {
        let key = format!("code:{}", code_hash.to_hex());
        self.storage.put(key, &code.to_vec())
    }

    /// Retrieve contract bytecode.
    pub fn get_code(&self, code_hash: &Hash) -> Result<Option<Vec<u8>>> {
        let key = format!("code:{}", code_hash.to_hex());
        self.storage.get(key)
    }

    /// Deploy a contract: store code and create account.
    pub fn deploy_contract(
        &self,
        address: &Address,
        code: &[u8],
        initial_balance: u64,
    ) -> Result<Hash> {
        // Hash the code
        let code_hash = hash(code);
        // Store the bytecode
        self.put_code(&code_hash, code)?;
        // Create the contract account
        let mut account = Account::new_contract(code_hash);
        account.balance = initial_balance;
        self.put_account(address, &account)?;
        Ok(code_hash)
    }
}
```

Let’s trace what happens when Alice sends 100 Mini Coins to Bob:

Initial State:

```
Accounts:
├─ Alice (0xAA...): balance=1000, nonce=5
├─ Bob   (0xBB...): balance=500,  nonce=0
└─ State Root: 0x7d3f...
```

Operation 1: state.transfer(&alice, &bob, 100)

Step 1 - Load Alice’s account:

```rust
let mut alice_account = state.get_account(&alice)?;
// → Account { balance: 1000, nonce: 5 }
```

Step 2 - Subtract balance (with check):

```rust
if alice_account.balance < 100 {
    return Err(StorageError::InsufficientBalance {
        address: alice,
        required: 100,
        available: alice_account.balance,
    }); // ✗ Would fail here if insufficient
}
alice_account.balance -= 100;
// → Account { balance: 900, nonce: 5 }
```

Step 3 - Load Bob’s account:

```rust
let mut bob_account = state.get_account(&bob)?;
// → Account { balance: 500, nonce: 0 } (or default if new)
```

Step 4 - Add balance:

```rust
bob_account.balance += 100;
// → Account { balance: 600, nonce: 0 }
```

Step 5 - Commit both accounts:

```rust
state.put_account(&alice, &alice_account)?;
state.put_account(&bob, &bob_account)?;
```

Final State:

```
Accounts:
├─ Alice (0xAA...): balance=900, nonce=5  [−100]
├─ Bob   (0xBB...): balance=600, nonce=0  [+100]
└─ State Root: 0x9e2a... [changed]
```

What Happens to Storage Keys?

When we call put_account, here’s what actually gets written to the database:

```
Before transfer:
  Key: "account:\xAA..." → bytes: [balance=1000, nonce=5, ...]
  Key: "account:\xBB..." → bytes: [balance=500,  nonce=0, ...]

After transfer:
  Key: "account:\xAA..." → bytes: [balance=900,  nonce=5, ...]  ← Updated
  Key: "account:\xBB..." → bytes: [balance=600,  nonce=0, ...]  ← Updated
```

Each account is serialized independently. sled handles writing the updated bytes to disk, along with write-ahead logging for crash safety.

State Root Recomputation:

The state root changes because Alice’s account changed:

```
Before:
  Hash(Alice data) = 0xABCD...
  Hash(Bob data)   = 0xEF01...
  Merkle Root = Hash(0xABCD... + 0xEF01... + ...) = 0x7d3f...

After:
  Hash(Alice data) = 0x1234...  ← Changed (balance decreased)
  Hash(Bob data)   = 0x5678...  ← Changed (balance increased)
  Merkle Root = Hash(0x1234... + 0x5678... + ...) = 0x9e2a...  ← Different!
```

This is why every state change produces a new state root — even a single byte difference in any account cascades up the Merkle tree.
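The cascade is easy to demonstrate. The sketch below uses std's `DefaultHasher` as a stand-in for a cryptographic hash and a flat hash-of-sorted-leaves as a stand-in for the Merkle tree (the real chain uses minichain-core's `hash` and `merkle_root`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a cryptographic hash; a real chain needs a
// collision-resistant hash, not DefaultHasher.
fn h(data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

// Toy "state root": hash the concatenation of sorted leaf hashes.
fn state_root(accounts: &[(&str, u64)]) -> u64 {
    let mut leaves: Vec<u64> = accounts
        .iter()
        .map(|(addr, balance)| h(format!("{addr}:{balance}").as_bytes()))
        .collect();
    leaves.sort_unstable(); // deterministic ordering
    let concat: Vec<u8> = leaves.iter().flat_map(|l| l.to_be_bytes()).collect();
    h(&concat)
}

fn main() {
    let before = state_root(&[("alice", 1000), ("bob", 500)]);
    let after = state_root(&[("alice", 900), ("bob", 600)]);
    // A single balance change produces a different root.
    assert_ne!(before, after);
    // Iteration order doesn't matter once leaves are sorted.
    let reordered = state_root(&[("bob", 500), ("alice", 1000)]);
    assert_eq!(before, reordered);
}
```

The second assertion previews the determinism point made later: sorting the leaf hashes makes the root independent of iteration order.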


Blocks need to be retrievable by both height (block number) and hash (unique identifier).

crates/storage/src/chain.rs

```rust
use minichain_core::{Block, Hash};
use crate::db::{Storage, StorageError, Result};

/// Manages block storage and chain state.
pub struct ChainStore<'a> {
    storage: &'a Storage,
}

impl<'a> ChainStore<'a> {
    pub fn new(storage: &'a Storage) -> Self {
        Self { storage }
    }
}
```

When we store a block, we create two index entries. Why two?

Different use cases need different lookup keys:

| Use Case | Lookup By | Example |
| --- | --- | --- |
| Verify a transaction proof | Hash | “Show me block 0xABC123...” |
| Display block explorer | Height | “Show me block #1000” |
| Sync from a peer | Hash | “Send me block 0xDEF456...” |
| Get the latest N blocks | Height | “Show me blocks #995 to #1000” |

Why hash as primary, height as secondary?

  • Hash is immutable — a block’s hash never changes. It’s the block’s true identity.
  • Height can change — during a chain reorganization (reorg), block #100 might be replaced by a different block. The old block still exists (same hash), but it’s no longer at height 100.

By storing the actual block data by hash, we never lose blocks during reorgs. We just update the height→hash pointer.

What do the keys look like?

The block_hash_key and block_height_key functions (defined in section 2.2) create namespaced database keys:

```
Block hash: 0xABCD1234...
    ↓ block_hash_key()
DB key: "block:hash:0xABCD1234..."  → stores: [full block data]

Block height: 42
    ↓ block_height_key()
DB key: "block:height:42"           → stores: 0xABCD1234... (just the hash)
```

The prefix (block:hash:, block:height:) prevents collisions with other data types (accounts, contract storage, etc.) in the same database.

```rust
impl<'a> ChainStore<'a> {
    /// Store a block with both height and hash indexes.
    pub fn put_block(&self, block: &Block) -> Result<()> {
        let hash = block.hash(); // e.g., 0xABCD1234...

        // Primary: "block:hash:0xABCD1234..." → [block data]
        let hash_key = Storage::block_hash_key(&hash);
        self.storage.put(&hash_key, block)?;

        // Secondary: "block:height:42" → 0xABCD1234...
        let height_key = Storage::block_height_key(block.header.height);
        self.storage.put(&height_key, &hash)?;

        Ok(())
    }

    /// Get a block by its hash.
    pub fn get_block_by_hash(&self, hash: &Hash) -> Result<Option<Block>> {
        let key = Storage::block_hash_key(hash);
        self.storage.get(key)
    }
}
```
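A toy model of the two-level index makes the reorg behavior visible. Plain `HashMap`s stand in for sled, with `u64` "hashes" and `&str` "blocks" for brevity; none of these names are the crate's API:

```rust
use std::collections::HashMap;

// Blocks stored by hash (primary); heights are just pointers to hashes.
struct Chain {
    by_hash: HashMap<u64, &'static str>, // hash → block data
    by_height: HashMap<u64, u64>,        // height → hash
}

impl Chain {
    fn put_block(&mut self, hash: u64, height: u64, data: &'static str) {
        self.by_hash.insert(hash, data);
        self.by_height.insert(height, hash);
    }

    fn get_by_height(&self, height: u64) -> Option<&'static str> {
        // Two lookups: height → hash, then hash → block.
        let hash = self.by_height.get(&height)?;
        self.by_hash.get(hash).copied()
    }
}

fn main() {
    let mut chain = Chain { by_hash: HashMap::new(), by_height: HashMap::new() };
    chain.put_block(0xAB, 100, "block A");
    // A reorg replaces height 100 with a different block...
    chain.put_block(0xCD, 100, "block B");
    assert_eq!(chain.get_by_height(100), Some("block B"));
    // ...but the old block is still retrievable by its hash.
    assert_eq!(chain.by_hash.get(&0xAB).copied(), Some("block A"));
}
```

Overwriting the height pointer never destroys block data, which is exactly the property the hash-primary design buys us.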

The chain head is the latest block. We need to track it for:

  • Adding new blocks
  • Answering “what’s the current state?”
  • Syncing with other nodes
```rust
const CHAIN_HEAD_KEY: &[u8] = b"chain:head";
const CHAIN_HEIGHT_KEY: &[u8] = b"chain:height";

impl<'a> ChainStore<'a> {
    /// Get the current chain head hash.
    pub fn get_head(&self) -> Result<Option<Hash>> {
        self.storage.get(CHAIN_HEAD_KEY)
    }

    /// Get the current chain height.
    pub fn get_height(&self) -> Result<u64> {
        Ok(self.storage.get::<u64>(CHAIN_HEIGHT_KEY)?.unwrap_or(0))
    }

    /// Update the chain head (called after adding a new block).
    pub fn set_head(&self, hash: &Hash, height: u64) -> Result<()> {
        self.storage.put(CHAIN_HEAD_KEY, hash)?;
        self.storage.put(CHAIN_HEIGHT_KEY, &height)?;
        Ok(())
    }

    /// Get the latest block.
    pub fn get_latest_block(&self) -> Result<Option<Block>> {
        match self.get_head()? {
            Some(hash) => self.get_block_by_hash(&hash),
            None => Ok(None),
        }
    }
}
```

The genesis block is special — it bootstraps the chain:

```rust
impl<'a> ChainStore<'a> {
    /// Initialize the chain with a genesis block.
    pub fn init_genesis(&self, genesis: &Block) -> Result<()> {
        // Verify it's actually a genesis block
        if genesis.header.height != 0 {
            return Err(StorageError::InvalidGenesis(
                "Genesis block must have height 0".into()
            ));
        }
        // Check if we already have a genesis. Note: we check the head, not
        // the height, because a freshly initialized chain sits at height 0,
        // which is also what get_height() returns for an empty database.
        if self.get_head()?.is_some() {
            return Err(StorageError::InvalidGenesis(
                "Chain already initialized".into()
            ));
        }
        // Store the genesis block
        self.put_block(genesis)?;
        // Set it as the head
        self.set_head(&genesis.hash(), 0)?;
        Ok(())
    }

    /// Check if the chain is initialized.
    pub fn is_initialized(&self) -> Result<bool> {
        Ok(self.get_head()?.is_some())
    }
}
```

Smart contracts need their own persistent key-value storage. Each contract has an isolated namespace.

If you’re familiar with how operating systems manage process memory, here’s how contract storage compares:

| Process Memory | Similarity | Why |
| --- | --- | --- |
| Stack | Not similar | Stack is temporary (function-scoped, LIFO). Contract storage persists forever. |
| Heap | Partially similar | Both are random-access, but the heap dies when the process exits; contract storage survives. |
| Memory-mapped files | More similar | Persistent, survives restarts, random-access. |
| Virtual address space | Most similar | Each process thinks it owns address 0x1000. Each contract thinks it owns slot 0. Isolated namespaces. |

```
Contract A: storage[0] = 100
Contract B: storage[0] = 200

These are different! Each contract has its own "slot 0".
```

Under the hood, we achieve isolation by prefixing the key with the contract address:

```
Contract A (0xAAAA...): storage[0] = 100
    → DB key: "storage:0xAAAA...:0" → 100

Contract B (0xBBBB...): storage[0] = 200
    → DB key: "storage:0xBBBB...:0" → 200
```

Same slot number, different database keys. No collision.
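The same point as runnable code, with a std-only mirror of the `contract_storage_key` helper shown earlier (fixed 20-byte arrays stand in for the real `Address` type):

```rust
// Std-only mirror of Storage::contract_storage_key for illustration.
fn contract_storage_key(contract: &[u8; 20], slot: &[u8]) -> Vec<u8> {
    let mut key = b"storage:".to_vec();
    key.extend_from_slice(contract);
    key.push(b':');
    key.extend_from_slice(slot);
    key
}

fn main() {
    let a = [0xAA; 20];
    let b = [0xBB; 20];
    // Both contracts write to "slot 0", but the database keys differ.
    let key_a = contract_storage_key(&a, &[0]);
    let key_b = contract_storage_key(&b, &[0]);
    assert_ne!(key_a, key_b);
    // Layout: 8-byte prefix + 20-byte address + separator + slot.
    assert!(key_a.starts_with(b"storage:"));
    assert_eq!(key_a.len(), 8 + 20 + 1 + 1);
}
```

Because the address is a fixed 20 bytes, the key layout stays unambiguous even though raw address bytes may themselves contain the `:` separator.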

```rust
impl<'a> StateManager<'a> {
    /// Read from contract storage.
    pub fn storage_get(
        &self,
        contract: &Address,
        key: &[u8],
    ) -> Result<Option<Vec<u8>>> {
        let db_key = Storage::contract_storage_key(contract, key);
        self.storage.get(db_key)
    }

    /// Write to contract storage.
    pub fn storage_put(
        &self,
        contract: &Address,
        key: &[u8],
        value: &[u8],
    ) -> Result<()> {
        let db_key = Storage::contract_storage_key(contract, key);
        self.storage.put(db_key, &value.to_vec())
    }

    /// Delete from contract storage.
    pub fn storage_delete(
        &self,
        contract: &Address,
        key: &[u8],
    ) -> Result<()> {
        let db_key = Storage::contract_storage_key(contract, key);
        self.storage.delete(db_key)
    }
}
```

In Ethereum and our VM, storage keys are typically 32-byte slots:

```rust
/// Storage slot (32 bytes, like Ethereum).
pub type StorageSlot = [u8; 32];

impl<'a> StateManager<'a> {
    /// Read a 32-byte slot from contract storage.
    pub fn sload(&self, contract: &Address, slot: &StorageSlot) -> Result<[u8; 32]> {
        match self.storage_get(contract, slot)? {
            Some(bytes) if bytes.len() == 32 => {
                let mut result = [0u8; 32];
                result.copy_from_slice(&bytes);
                Ok(result)
            }
            Some(_) => Err(StorageError::InvalidStorageValue),
            None => Ok([0u8; 32]), // Uninitialized slots are zero
        }
    }

    /// Write a 32-byte value to contract storage.
    pub fn sstore(
        &self,
        contract: &Address,
        slot: &StorageSlot,
        value: &[u8; 32],
    ) -> Result<()> {
        self.storage_put(contract, slot, value)
    }
}
```
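How a plain integer maps into a 32-byte word is a convention the VM has to pick. The sketch below assumes Ethereum-style right-aligned, big-endian, zero-padded words — an assumption for illustration, not something the code above fixes:

```rust
/// 32-byte word, matching the StorageSlot layout above.
type Word = [u8; 32];

// Assumed convention: integers are right-aligned, big-endian, zero-padded,
// so an uninitialized (all-zero) slot reads back as integer 0.
fn u64_to_word(n: u64) -> Word {
    let mut word = [0u8; 32];
    word[24..].copy_from_slice(&n.to_be_bytes());
    word
}

fn word_to_u64(word: &Word) -> u64 {
    u64::from_be_bytes(word[24..].try_into().unwrap())
}

fn main() {
    let counter = u64_to_word(7);
    assert_eq!(word_to_u64(&counter), 7);
    // Zero word ↔ zero integer: consistent with sload's
    // "uninitialized slots are zero" behavior.
    assert_eq!(word_to_u64(&[0u8; 32]), 0);
}
```

The nice property of this convention is that `sload` returning `[0u8; 32]` for a missing slot and "the counter is 0" mean the same thing, so contracts never need to distinguish "unset" from "zero".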

The state root is a single hash that commits to the entire world state. It goes in the block header.

From Chapter 1:

| Use Case | How state_root Helps |
| --- | --- |
| Consensus | Nodes must agree on the result, not just the transactions |
| Light clients | Verify account state without downloading everything |
| State sync | New nodes download current state, verify against state_root |

We compute a merkle root over all accounts by scanning every key that starts with "account:".

What does scan_prefix return?

When we call scan_prefix(b"account:"), sled returns an iterator over all key-value pairs where the key starts with that prefix:

```
Iteration 1:
  key:   b"account:\xAA\xBB\xCC..."  (prefix + Alice's address bytes)
  value: b"\x00\x00\x03\xE8..."      (rkyv-serialized Account struct)

Iteration 2:
  key:   b"account:\xDD\xEE\xFF..."  (prefix + Bob's address bytes)
  value: b"\x00\x00\x01\xF4..."      (rkyv-serialized Account struct)

... and so on for every account in the database
```

We hash each (key, value) pair together to create a leaf hash for the merkle tree:

```rust
use minichain_core::{Hash, merkle_root, hash};

impl<'a> StateManager<'a> {
    /// Compute the state root from all accounts.
    pub fn compute_state_root(&self) -> Result<Hash> {
        // Collect all account hashes
        let mut account_hashes = Vec::new();

        // Iterate over all accounts (prefix scan)
        let prefix = b"account:";
        for result in self.storage.db.scan_prefix(prefix) {
            let (key, value) = result?;
            // Hash the key-value pair together
            let pair_hash = hash(&[key.as_ref(), value.as_ref()].concat());
            account_hashes.push(pair_hash);
        }

        // Sort for deterministic ordering
        account_hashes.sort_by(|a, b| a.0.cmp(&b.0));

        // Compute Merkle root
        Ok(merkle_root(&account_hashes))
    }
}
```

How accounts are organized into a Merkle tree for the state root:

```
Accounts in Storage:

┌─────────────────────────────────────────┐
│ Key:   "account:0xAA..."                │
│ Value: Account { balance: 1000, ... }   │
├─────────────────────────────────────────┤
│ Key:   "account:0xBB..."                │
│ Value: Account { balance: 500, ... }    │
├─────────────────────────────────────────┤
│ Key:   "account:0xCC..."                │
│ Value: Account { balance: 250, ... }    │
└─────────────────────────────────────────┘

State Trie Construction:

                       State Root
            ┌──────────────┴──────────────┐
            │                             │
    H(A_key + A_val)               H(B,C combined)
            │                             │
         0xAA...               ┌──────────┴──────────┐
      balance=1000             │                     │
                       H(B_key + B_val)      H(C_key + C_val)
                               │                     │
                            0xBB...               0xCC...
                         balance=500           balance=250

Merkle Root Calculation:
1. Hash each (key, value) pair
2. Sort hashes (deterministic ordering)
3. Build binary tree bottom-up
4. Root hash = State Root (stored in block header)
```

What Changes the State Root:

```
Initial State:                    After Alice → Bob Transfer:

State Root: 0x7d3f...             State Root: 0x9e2a...  [changed]
        │                                 │
   ┌────┴────┐                       ┌────┴────┐
   │         │                       │         │
 Alice      Bob                    Alice      Bob
 bal=1000   bal=500                bal=900    bal=600
                                     ↑          ↑
                                   (−100)     (+100)
```

Because Alice's account changed → her hash changed → branch hash changed → root changed.

Hash tables don’t preserve insertion order. Without sorting, the same set of accounts could produce different merkle roots depending on iteration order. Sorting ensures determinism — every node computes the same root.

When executing a block, multiple accounts change. We need atomic updates — either all changes apply, or none do:

```rust
impl Storage {
    /// Apply multiple operations atomically.
    pub fn batch<F>(&self, f: F) -> Result<()>
    where
        F: FnOnce(&mut sled::Batch) -> Result<()>,
    {
        // sled::Batch is a staging area; mutating it touches nothing on disk.
        let mut batch = sled::Batch::default();
        f(&mut batch)?;
        self.db.apply_batch(batch)?; // ← Atomicity happens HERE
        Ok(())
    }
}
```

In practice, block execution would:

```rust
// Execute all transactions
let state_changes = execute_block(&block)?;

// Apply all changes atomically
storage.batch(|batch| {
    for (address, account) in state_changes {
        let key = Storage::account_key(&address);
        let bytes = rkyv::to_bytes::<_, 256>(&account)
            .map_err(|_| StorageError::Serialization)?;
        batch.insert(key, bytes.as_slice());
    }
    Ok(())
})?;
```
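The essential pattern, independent of sled, is "stage everything, validate, then apply in one step". A std-only sketch with hypothetical names (sled's `apply_batch` plus its write-ahead log is what makes the real commit atomic and crash-safe):

```rust
use std::collections::HashMap;

// Staged-write sketch: buffer all changes, then commit them in one step.
struct StagedWrites {
    pending: Vec<(String, u64)>,
}

impl StagedWrites {
    fn insert(&mut self, key: &str, value: u64) {
        // Staging touches nothing in the database.
        self.pending.push((key.to_string(), value));
    }

    // If validation fails before commit is ever called, the database
    // was never touched; otherwise every staged write lands.
    fn commit(self, db: &mut HashMap<String, u64>) {
        for (k, v) in self.pending {
            db.insert(k, v);
        }
    }
}

fn main() {
    let mut db = HashMap::new();
    db.insert("alice".to_string(), 1000u64);

    let mut batch = StagedWrites { pending: Vec::new() };
    batch.insert("alice", 900);
    batch.insert("bob", 100);
    // Until commit, the database is unchanged.
    assert_eq!(db.get("alice"), Some(&1000));
    batch.commit(&mut db);
    assert_eq!(db.get("alice"), Some(&900));
    assert_eq!(db.get("bob"), Some(&100));
}
```

Unlike this in-memory sketch, the real thing also has to survive a crash mid-commit; that is the part sled's write-ahead log provides.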

Here’s how the components work together:

crates/storage/src/lib.rs

```rust
pub mod db;
pub mod state;
pub mod chain;

pub use db::{Storage, StorageError, Result};
pub use state::StateManager;
pub use chain::ChainStore;
```

And a typical caller:

```rust
use minichain_storage::{Storage, StateManager, ChainStore, Result};
use minichain_core::{Block, Keypair};

fn main() -> Result<()> {
    // Open database
    let storage = Storage::open("./blockchain_data")?;

    // Initialize managers
    let state = StateManager::new(&storage);
    let chain = ChainStore::new(&storage);

    // Check if we need to create genesis
    if !chain.is_initialized()? {
        let authority = Keypair::generate();
        let genesis = Block::genesis(authority.address());
        chain.init_genesis(&genesis)?;
        println!("Genesis block created!");
    }

    // Create an account (set_balance/get_balance are thin convenience
    // helpers over get_account/put_account)
    let alice = Keypair::generate();
    state.set_balance(&alice.address(), 1_000_000)?;

    // Query state
    let balance = state.get_balance(&alice.address())?;
    println!("Alice's balance: {}", balance);

    // Get chain info
    let height = chain.get_height()?;
    let head = chain.get_head()?.unwrap();
    println!("Chain height: {}, head: {}", height, head.to_hex());

    Ok(())
}
```

We’ve built the persistence layer:

| Component | What It Does |
| --- | --- |
| Storage | sled database wrapper with serialization |
| StateManager | Account CRUD, balances, nonces, contract storage |
| ChainStore | Block storage, height/hash indexes, chain head |
| State Root | Merkle root over all accounts for verification |

| Pattern | Why |
| --- | --- |
| Key prefixes (`account:`, `block:`) | Namespace isolation, prefix scanning |
| Separate code storage | Fast account lookups, deduplication |
| Atomic batch updates | Consistency after crashes |
| State root computation | Consensus, light clients, state sync |
```
$ cargo test -p minichain-storage

running 12 tests
test db::tests::test_open_temporary ... ok
test db::tests::test_put_get ... ok
test state::tests::test_account_crud ... ok
test state::tests::test_balance_operations ... ok
test chain::tests::test_genesis_init ... ok
test chain::tests::test_block_by_height ... ok
# ... all 12 tests pass
```

With persistence in place, we’re ready to build the virtual machine in Chapter 3. The VM will:

  • Execute bytecode stored in contracts
  • Read/write to the storage layer via SLOAD/SSTORE
  • Track gas consumption
  • Return execution results

The storage layer we built will serve as the VM’s persistent backend — every SSTORE writes here, every SLOAD reads from here.