feat(zig): native Arboricx bundle parser and C ABI

This commit is contained in:
2026-05-10 21:21:58 -05:00
parent 8a673e282d
commit d7a7a8134c
27 changed files with 5365 additions and 18 deletions

172
AGENTS.md

@@ -4,13 +4,20 @@
## 0. Test Driven Development
Write and discuss tests with the user before working on implementation code. Do not modify existing tests without explicit permission.
## 1. Build & Test
```bash
# Haskell tests (default check)
nix flake check
# Zig build
nix build .#tricu-zig
# Zig tests (separate target — not part of nix flake check)
nix build .#tricu-zig-tests
# Full build
nix build .#
```
@@ -32,10 +39,10 @@ nix build .#
| `LToken` | Lexer tokens |
| `Node` / `MerkleHash` | Content-addressed Merkle DAG nodes |
### Source modules (Haskell)
| Module | Purpose |
|--------|---------|
| `Main.hs` | CLI entry point (`cmdargs`), three modes: `repl`, `eval`, `decode` |
| `Eval.hs` | Interpreter: `evalTricu`, `result`, `evalSingle` |
| `Parser.hs` | Megaparsec parser → `TricuAST` |
@@ -46,13 +53,31 @@ nix build .#
| `ContentStore.hs` | SQLite-backed term persistence |
| `Wire.hs` | Arboricx portable wire format — encode/decode/import/export of Merkle-DAG bundle blobs |
### Multi-language Arboricx ecosystem
Arboricx is the portable executable-object format used by tricu. The project now includes native parsing and execution in multiple languages:
| Language | Location | Capabilities |
|----------|----------|--------------|
| **Haskell** | `src/Wire.hs`, `src/Research.hs` | Reference implementation — bundle encode/decode, content store, full Tree Calculus reduction |
| **tricu (self-hosted)** | `kernel_run_arboricx_typed.dag` | A self-hosting Arboricx parser/executor written in tricu itself. Used as a kernel inside the Zig host for maximum portability ("cool but useless" — ~3s for `append`) |
| **Zig** | `ext/zig/` | **Production host** — native bundle parser, WHNF reducer, C ABI (`libarboricx.so` / `.a`), CLI (`tricu-zig`), Python FFI support |
| **JavaScript (Node)** | `ext/js/` | Native bundle parser, manifest decoder, Merkle DAG verifier, Tree Calculus reducer, CLI runner |
| **PHP** | `ext/php/` | Tree Calculus reducer, codecs, kernel loader, CLI runner |
All hosts share the same bundle format and Merkle hashing scheme.
### File extensions
- `.hs` - Haskell source
- `.tri` - tricu language source (used in `lib/`, `test/`, `demos/`)
- `.arboricx` - Portable executable bundle
- `.dag` - Serialized kernel DAG (used by `gen_kernel.zig` at build time)
## 3. Test Suite
### Haskell tests
Tests live in `test/Spec.hs` and use **Tasty** + **HUnit**.
```bash
@@ -75,11 +100,24 @@ nix flake check
| `elimLambdaSingle` | Lambda elimination: eta reduction, SDef binding, semantics preservation |
| `stressElimLambda` | Lambda elimination stress test: 200 vars, 800-body curried lambda |
### Zig tests
Run separately via:
```bash
nix build .#tricu-zig-tests
```
These are **not** included in `nix flake check`. The test derivation compiles and runs:
| Test | What it covers |
|------|----------------|
| `c_abi_test.c` | Smoke tests — leaf, stem, fork, app, reduce, number/string roundtrip, kernel root |
| `c_abi_append_test.c` | Kernel path — `append.arboricx` with string arguments via Tricu kernel |
| `native_bundle_append_test.c` | Native fast path — `append.arboricx` loaded natively, applied, reduced |
| `native_bundle_id_test.c` | Native fast path — `id.arboricx` |
| `native_bundle_bools_test.c` | Native fast path — `true.arboricx` / `false.arboricx` |
| `python_ffi_test.py` | Python ctypes FFI — tests both kernel and native paths for `id` and `append` |
## 4. tricu Language Quick Reference
@@ -145,12 +183,75 @@ Portable executable bundles are generated via `Wire.hs`. See `docs/arboricx-bund
TRICU_DB_PATH=/tmp/tricu.db ./result/bin/tricu export -o list_ops.arboricx append
```
## 8. Zig Arboricx Host (`ext/zig/`)
The Zig host is a fast implementation for running Arboricx bundles. It provides a native bundle parser and arena-based evaluator.
### Modules
| File | Role |
|------|------|
| `src/main.zig` | CLI entrypoint — default native path, `--kernel` fallback |
| `src/bundle.zig` | Native Arboricx bundle parser — verifies digests, hashes, loads DAG into arena |
| `src/c_abi.zig` | C FFI exports — `arboricx_init`, tree constructors, codecs, reduction, bundle loading |
| `src/reduce.zig` | WHNF reducer (Tree Calculus `apply` rules) |
| `src/arena.zig` | Node arena (`ArrayListUnmanaged`) |
| `src/tree.zig` | `Node` union + iterative `copyTree` |
| `src/codecs.zig` | Number/string/list/bytes encoding + result unwrapping |
| `src/kernel.zig` | Embeds DAG kernel into arena (fallback path only) |
| `src/ternary.zig` | Ternary string parser for Tree Calculus terms |
| `tools/gen_kernel.zig` | Build-time tool: converts `.dag` → `kernel_embed.zig` |
| `include/arboricx.h` | C header for `libarboricx` |
### C ABI
Key functions:
```c
arb_ctx_t* arboricx_init(void);
uint32_t arb_load_bundle(arb_ctx_t*, const uint8_t* bytes, size_t len, const char* name);
uint32_t arb_load_bundle_default(arb_ctx_t*, const uint8_t* bytes, size_t len);
uint32_t arb_reduce(arb_ctx_t*, uint32_t root, uint64_t fuel);
```
`arb_reduce` evaluates in a **fresh scratch arena** so garbage never accumulates.
### Stack size requirement
Tree Calculus reduction is deeply recursive. Assume a segfault is stack exhaustion until proven otherwise, and raise the stack limit before debugging anything else.
```bash
ulimit -s 32768 # 32 MB
```
### Performance comparison
| Fixture | Native path | Kernel path (`--kernel`) |
|---------|-------------|--------------------------|
| `append "hello " "world"` | **~0.007 s** | ~3.4 s |
| `id "hello"` | **~0.005 s** | ~0.38 s |
The kernel path is kept as a "cool but useless" fallback — the DAG is tiny (~30 KB) so the cost is negligible.
## 9. Nix Flake Outputs
| Output | Description |
|--------|-------------|
| `packages.default` / `packages.tricu` | Haskell tricu package |
| `packages.tricu-zig` | Zig CLI + `libarboricx.a` + `libarboricx.so` + `arboricx.h` |
| `packages.tricu-zig-tests` | **Separate test target** — C ABI + native bundle + Python FFI tests |
| `packages.tricu-container` | Docker image |
| `checks.default` / `checks.tricu` | Haskell test suite via Tasty/HUnit |
`tricu-zig-tests` is deliberately **not** in `checks` so `nix flake check` remains fast.
## 10. Directory Layout
```
tricu/
├── flake.nix # Nix flake: packages, tests, devShell
├── tricu.cabal # Cabal package (used via callCabal2nix)
├── AGENTS.md # This file
├── src/ # Haskell modules
│ ├── Main.hs
│ ├── Eval.hs
@@ -160,10 +261,11 @@ tricu/
│ ├── REPL.hs
│ ├── Research.hs
│ ├── ContentStore.hs
│ └── Wire.hs
├── test/
│ ├── Spec.hs # Tasty + HUnit tests
│ ├── *.tri # tricu test programs
│ ├── *.arboricx # Arboricx bundle fixtures
│ └── local-ns/ # Module namespace test files
├── lib/
│ ├── base.tri
@@ -175,10 +277,52 @@ tricu/
│ ├── toSource.tri
│ ├── levelOrderTraversal.tri
│ └── patternMatching.tri
├── ext/ # Multi-language Arboricx hosts
│ ├── js/ # Node.js bundle parser + reducer
│ │ ├── src/
│ │ │ ├── bundle.js
│ │ │ ├── manifest.js
│ │ │ ├── merkle.js
│ │ │ ├── tree.js
│ │ │ ├── codecs.js
│ │ │ └── cli.js
│ │ └── test/
│ ├── php/ # PHP bundle loader + reducer
│ │ ├── src/
│ │ │ ├── functions.php
│ │ │ ├── codecs.php
│ │ │ ├── kernel.php
│ │ │ └── Tree/
│ │ └── run.php
│ └── zig/ # Zig production host
│ ├── build.zig
│ ├── build.zig.zon
│ ├── kernel_run_arboricx_typed.dag
│ ├── include/arboricx.h
│ ├── src/
│ │ ├── main.zig
│ │ ├── bundle.zig
│ │ ├── c_abi.zig
│ │ ├── codecs.zig
│ │ ├── kernel.zig
│ │ ├── reduce.zig
│ │ ├── arena.zig
│ │ ├── tree.zig
│ │ └── ternary.zig
│ ├── tests/
│ │ ├── c_abi_test.c
│ │ ├── c_abi_append_test.c
│ │ ├── native_bundle_append_test.c
│ │ ├── native_bundle_id_test.c
│ │ ├── native_bundle_bools_test.c
│ │ └── python_ffi_test.py
│ └── tools/
│ └── gen_kernel.zig
└── docs/
└── arboricx-bundle-format.md
```
## 11. Content Store Workflow (Custom DB)
The content store location is controlled by the `TRICU_DB_PATH` environment variable. When set, `eval` mode automatically loads all stored terms into the initial environment, so you can call any previously imported/evaluated term by name.
@@ -206,14 +350,16 @@ t> !definitions
Without `TRICU_DB_PATH` set, `eval` uses only the terms defined in the input file(s).
## 12. Development Tips
- **REPL:** `nix run .#` starts the interactive tricu REPL.
- **Evaluate files:** `nix run .# -- eval -f demos/equality.tri`
- **Zig host:** `nix build .#tricu-zig` then `./result/bin/tricu-zig <bundle> [args...]`
- **Zig tests:** `nix build .#tricu-zig-tests`
- **GHC options:** `-threaded -rtsopts -with-rtsopts=-N` for parallel runtime. Use `-N` RTS flag for multi-core.
- **Upx** is in the devShell for binary compression if needed.
## 13. Viewing Haskell Dependency Docs from Nix
When you need Haddock documentation for a Haskell dependency available in Nixpkgs, build the package's `doc` output directly with `^doc`.

13
ext/zig/.gitignore vendored Normal file

@@ -0,0 +1,13 @@
# Zig build artifacts
.zig-cache/
zig-out/
# Generated binaries (keep .c sources, ignore compiled artifacts)
/c_abi_test
/c_abi_append_test
c_abi_append_shared
tests/c_abi_append_test
# Temp files
*.o
*.tmp

67
ext/zig/build.zig Normal file

@@ -0,0 +1,67 @@
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// -- kernel generator tool (runs on build host) --
const gen_kernel_mod = b.createModule(.{
.root_source_file = b.path("tools/gen_kernel.zig"),
.target = b.graph.host,
.optimize = .ReleaseSafe,
});
const gen_kernel = b.addExecutable(.{
.name = "gen_kernel",
.root_module = gen_kernel_mod,
});
const run_gen_kernel = b.addRunArtifact(gen_kernel);
run_gen_kernel.addFileArg(b.path("kernel_run_arboricx_typed.dag"));
const kernel_embed = run_gen_kernel.addOutputFileArg("kernel_embed.zig");
// -- kernel module shared by exe and lib --
const kernel_mod = b.createModule(.{
.root_source_file = kernel_embed,
});
// -- main CLI executable --
const exe_mod = b.createModule(.{
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
exe_mod.addImport("kernel_embed", kernel_mod);
const exe = b.addExecutable(.{
.name = "tricu-zig",
.root_module = exe_mod,
});
b.installArtifact(exe);
const run_cmd = b.addRunArtifact(exe);
run_cmd.step.dependOn(b.getInstallStep());
const run_step = b.step("run", "Run tricu-zig");
run_step.dependOn(&run_cmd.step);
// -- C ABI static library --
const lib_mod = b.createModule(.{
.root_source_file = b.path("src/c_abi.zig"),
.target = target,
.optimize = optimize,
});
lib_mod.pic = true;
lib_mod.addImport("kernel_embed", kernel_mod);
const static_lib = b.addLibrary(.{
.name = "arboricx",
.root_module = lib_mod,
});
b.installArtifact(static_lib);
// -- C ABI shared library (for dynamic language FFI) --
const shared_lib = b.addLibrary(.{
.name = "arboricx",
.root_module = lib_mod,
.linkage = .dynamic,
});
b.installArtifact(shared_lib);
}

13
ext/zig/build.zig.zon Normal file

@@ -0,0 +1,13 @@
.{
.name = .tricu_zig,
.version = "0.0.1",
.fingerprint = 0xa9aedd8049d1cce9,
.minimum_zig_version = "0.16.0",
.paths = .{
"build.zig",
"build.zig.zon",
"src",
"tools",
"kernels",
},
}

ext/zig/include/arboricx.h Normal file

@@ -0,0 +1,54 @@
#ifndef ARBORICX_H
#define ARBORICX_H
#include <stddef.h>
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef struct arb_ctx arb_ctx_t;
/* Context lifecycle */
arb_ctx_t* arboricx_init(void);
void arboricx_free(arb_ctx_t* ctx);
void arboricx_free_buf(arb_ctx_t* ctx, uint8_t* ptr, size_t len);
/* Tree construction */
uint32_t arb_leaf(arb_ctx_t* ctx);
uint32_t arb_stem(arb_ctx_t* ctx, uint32_t child);
uint32_t arb_fork(arb_ctx_t* ctx, uint32_t left, uint32_t right);
uint32_t arb_app(arb_ctx_t* ctx, uint32_t func, uint32_t arg);
/* Reduction */
uint32_t arb_reduce(arb_ctx_t* ctx, uint32_t root, uint64_t fuel);
/* Codec constructors */
uint32_t arb_of_number(arb_ctx_t* ctx, uint64_t n);
uint32_t arb_of_string(arb_ctx_t* ctx, const char* s);
uint32_t arb_of_bytes(arb_ctx_t* ctx, const uint8_t* bytes, size_t len);
uint32_t arb_of_list(arb_ctx_t* ctx, const uint32_t* items, size_t len);
/* Codec destructors (return 1 on success, 0 on failure) */
int arb_to_number(arb_ctx_t* ctx, uint32_t root, uint64_t* out);
int arb_to_string(arb_ctx_t* ctx, uint32_t root, uint8_t** out_ptr, size_t* out_len);
int arb_to_bytes(arb_ctx_t* ctx, uint32_t root, uint8_t** out_ptr, size_t* out_len);
int arb_to_bool(arb_ctx_t* ctx, uint32_t root, int* out);
/* Result unwrapping (return 1 on success, 0 on failure) */
int arb_unwrap_result(arb_ctx_t* ctx, uint32_t root, int* out_ok, uint32_t* out_value, uint32_t* out_rest);
int arb_unwrap_host_value(arb_ctx_t* ctx, uint32_t root, uint64_t* out_tag, uint32_t* out_payload);
/* Kernel entrypoints */
uint32_t arb_kernel_root(arb_ctx_t* ctx);
/* Native bundle loading (fast path — bypasses the Tricu kernel) */
uint32_t arb_load_bundle(arb_ctx_t* ctx, const uint8_t* bytes, size_t len, const char* name);
uint32_t arb_load_bundle_default(arb_ctx_t* ctx, const uint8_t* bytes, size_t len);
#ifdef __cplusplus
}
#endif
#endif /* ARBORICX_H */

File diff suppressed because it is too large

36
ext/zig/src/arena.zig Normal file

@@ -0,0 +1,36 @@
const std = @import("std");
const tree = @import("tree.zig");
pub const Arena = struct {
allocator: std.mem.Allocator,
nodes: std.ArrayList(tree.Node),
pub fn init(allocator: std.mem.Allocator) Arena {
return .{
.allocator = allocator,
.nodes = .empty,
};
}
pub fn deinit(self: *Arena) void {
self.nodes.deinit(self.allocator);
}
pub fn alloc(self: *Arena, node: tree.Node) !u32 {
const idx: u32 = @intCast(self.nodes.items.len);
try self.nodes.append(self.allocator, node);
return idx;
}
pub fn get(self: *Arena, idx: u32) *tree.Node {
return &self.nodes.items[idx];
}
pub fn len(self: *const Arena) u32 {
return @intCast(self.nodes.items.len);
}
pub fn reset(self: *Arena, keep: u32) void {
self.nodes.shrinkRetainingCapacity(keep);
}
};

479
ext/zig/src/bundle.zig Normal file

@@ -0,0 +1,479 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
pub const Hash = [32]u8;
pub const Error = error{
InvalidMagic,
InvalidVersion,
Truncated,
InvalidManifest,
InvalidNodePayload,
HashMismatch,
ExportNotFound,
MissingChild,
UnexpectedFormat,
DigestMismatch,
OutOfMemory,
};
const Parser = struct {
bytes: []const u8,
pos: usize,
fn init(bytes: []const u8) Parser {
return .{ .bytes = bytes, .pos = 0 };
}
fn remaining(self: *const Parser) usize {
return self.bytes.len - self.pos;
}
fn expect(self: *Parser, n: usize) Error![]const u8 {
if (self.remaining() < n) return error.Truncated;
const result = self.bytes[self.pos .. self.pos + n];
self.pos += n;
return result;
}
fn readU8(self: *Parser) Error!u8 {
const b = try self.expect(1);
return b[0];
}
fn readU16(self: *Parser) Error!u16 {
const b = try self.expect(2);
return std.mem.readInt(u16, b[0..2], .big);
}
fn readU32(self: *Parser) Error!u32 {
const b = try self.expect(4);
return std.mem.readInt(u32, b[0..4], .big);
}
fn readU64(self: *Parser) Error!u64 {
const b = try self.expect(8);
return std.mem.readInt(u64, b[0..8], .big);
}
fn readHash(self: *Parser) Error!Hash {
const b = try self.expect(32);
var h: Hash = undefined;
@memcpy(&h, b);
return h;
}
fn readLengthPrefixedBytes(self: *Parser, allocator: std.mem.Allocator) Error![]const u8 {
const len = try self.readU32();
const bytes = try self.expect(len);
const copy = try allocator.alloc(u8, bytes.len);
@memcpy(copy, bytes);
return copy;
}
};
const SectionEntry = struct {
section_type: u32,
offset: u64,
length: u64,
digest: Hash,
};
fn parseHeader(p: *Parser) Error!struct { major: u16, minor: u16, section_count: u32, dir_offset: u64 } {
const magic = try p.expect(8);
if (!std.mem.eql(u8, magic, "ARBORICX")) return error.InvalidMagic;
const major = try p.readU16();
const minor = try p.readU16();
const section_count = try p.readU32();
_ = try p.readU64(); // flags
const dir_offset = try p.readU64();
if (major != 1) return error.InvalidVersion;
return .{ .major = major, .minor = minor, .section_count = section_count, .dir_offset = dir_offset };
}
fn parseSectionEntries(p: *Parser, count: u32, allocator: std.mem.Allocator) Error![]SectionEntry {
const entries = try allocator.alloc(SectionEntry, count);
errdefer allocator.free(entries);
for (entries) |*entry| {
entry.section_type = try p.readU32();
_ = try p.readU16(); // section_version
_ = try p.readU16(); // section_flags
const compression = try p.readU16();
const digest_alg = try p.readU16();
entry.offset = try p.readU64();
entry.length = try p.readU64();
entry.digest = try p.readHash();
if (compression != 0) return error.UnexpectedFormat;
if (digest_alg != 1) return error.UnexpectedFormat;
}
return entries;
}
fn sha256Digest(data: []const u8) Hash {
var h = std.crypto.hash.sha2.Sha256.init(.{});
h.update(data);
var out: Hash = undefined;
h.final(&out);
return out;
}
fn parseManifest(p: *Parser, allocator: std.mem.Allocator) Error!struct { exports: []Export, roots: []Root } {
const magic = try p.expect(8);
if (!std.mem.eql(u8, magic, "ARBMNFST")) return error.InvalidManifest;
const major = try p.readU16();
_ = try p.readU16(); // minor
if (major != 1) return error.InvalidVersion;
const schema = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(schema);
if (!std.mem.eql(u8, schema, "arboricx.bundle.manifest.v1")) return error.UnexpectedFormat;
const bundle_type = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(bundle_type);
if (!std.mem.eql(u8, bundle_type, "tree-calculus-executable-object")) return error.UnexpectedFormat;
const calc = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(calc);
if (!std.mem.eql(u8, calc, "tree-calculus.v1")) return error.UnexpectedFormat;
const hash_alg = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(hash_alg);
if (!std.mem.eql(u8, hash_alg, "sha256")) return error.UnexpectedFormat;
const hash_domain = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(hash_domain);
if (!std.mem.eql(u8, hash_domain, "arboricx.merkle.node.v1")) return error.UnexpectedFormat;
const payload_type = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(payload_type);
if (!std.mem.eql(u8, payload_type, "arboricx.merkle.payload.v1")) return error.UnexpectedFormat;
const sem = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(sem);
if (!std.mem.eql(u8, sem, "tree-calculus.v1")) return error.UnexpectedFormat;
const eval_mode = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(eval_mode);
if (!std.mem.eql(u8, eval_mode, "normal-order")) return error.UnexpectedFormat;
const abi = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(abi);
if (!std.mem.eql(u8, abi, "arboricx.abi.tree.v1")) return error.UnexpectedFormat;
const cap_count = try p.readU32();
var i: u32 = 0;
while (i < cap_count) : (i += 1) {
const cap = try p.readLengthPrefixedBytes(allocator);
defer allocator.free(cap);
if (cap.len != 0) return error.UnexpectedFormat;
}
const closure = try p.readU8();
if (closure != 0) return error.UnexpectedFormat;
const root_count = try p.readU32();
const roots = try allocator.alloc(Root, root_count);
errdefer allocator.free(roots);
for (roots) |*r| {
r.hash = try p.readHash();
r.role = try p.readLengthPrefixedBytes(allocator);
}
const export_count = try p.readU32();
const exports = try allocator.alloc(Export, export_count);
errdefer {
for (exports) |*e| {
allocator.free(e.name);
allocator.free(e.kind);
allocator.free(e.abi);
}
allocator.free(exports);
}
for (exports) |*e| {
e.name = try p.readLengthPrefixedBytes(allocator);
e.root = try p.readHash();
e.kind = try p.readLengthPrefixedBytes(allocator);
e.abi = try p.readLengthPrefixedBytes(allocator);
if (!std.mem.eql(u8, e.abi, "arboricx.abi.tree.v1")) return error.UnexpectedFormat;
}
const metadata_count = try p.readU32();
var m: u32 = 0;
while (m < metadata_count) : (m += 1) {
_ = try p.readU16(); // tag
const len = try p.readU32();
_ = try p.expect(len);
}
const ext_count = try p.readU32();
var e_idx: u32 = 0;
while (e_idx < ext_count) : (e_idx += 1) {
_ = try p.readU16(); // tag
const len = try p.readU32();
_ = try p.expect(len);
}
return .{ .exports = exports, .roots = roots };
}
const Export = struct {
name: []const u8,
root: Hash,
kind: []const u8,
abi: []const u8,
};
const Root = struct {
hash: Hash,
role: []const u8,
};
fn parseNodeSection(p: *Parser, allocator: std.mem.Allocator) Error!std.AutoHashMap(Hash, []const u8) {
const node_count = try p.readU64();
var map = std.AutoHashMap(Hash, []const u8).init(allocator);
errdefer map.deinit();
var i: u64 = 0;
while (i < node_count) : (i += 1) {
const hash = try p.readHash();
const plen = try p.readU32();
const payload = try p.expect(plen);
const expected_hash = blk: {
var h = std.crypto.hash.sha2.Sha256.init(.{});
h.update("arboricx.merkle.node.v1");
h.update(&[_]u8{0});
h.update(payload);
var out: Hash = undefined;
h.final(&out);
break :blk out;
};
if (!std.mem.eql(u8, &hash, &expected_hash)) return error.HashMismatch;
try map.put(hash, payload);
}
return map;
}
fn loadNode(
arena: *Arena,
payloads: std.AutoHashMap(Hash, []const u8),
cache: *std.AutoHashMap(Hash, u32),
root_hash: Hash,
) Error!u32 {
const Frame = struct {
hash: Hash,
state: u2,
};
const max_stack = payloads.count() * 2;
var stack = try arena.allocator.alloc(Frame, max_stack);
defer arena.allocator.free(stack);
var sp: usize = 0;
stack[sp] = .{ .hash = root_hash, .state = 0 };
sp += 1;
while (sp > 0) {
const frame = &stack[sp - 1];
if (cache.get(frame.hash)) |_| {
sp -= 1;
continue;
}
if (frame.state == 0) {
frame.state = 1;
const payload = payloads.get(frame.hash) orelse return error.MissingChild;
if (payload.len == 0) return error.InvalidNodePayload;
switch (payload[0]) {
0x00 => {
if (payload.len != 1) return error.InvalidNodePayload;
},
0x01 => {
if (payload.len != 33) return error.InvalidNodePayload;
var child_hash: Hash = undefined;
@memcpy(&child_hash, payload[1..33]);
if (cache.get(child_hash) == null) {
stack[sp] = .{ .hash = child_hash, .state = 0 };
sp += 1;
}
},
0x02 => {
if (payload.len != 65) return error.InvalidNodePayload;
var left_hash: Hash = undefined;
var right_hash: Hash = undefined;
@memcpy(&left_hash, payload[1..33]);
@memcpy(&right_hash, payload[33..65]);
const need_right = cache.get(right_hash) == null;
const need_left = cache.get(left_hash) == null;
if (need_right) {
stack[sp] = .{ .hash = right_hash, .state = 0 };
sp += 1;
}
if (need_left) {
stack[sp] = .{ .hash = left_hash, .state = 0 };
sp += 1;
}
},
else => return error.InvalidNodePayload,
}
} else {
const payload = payloads.get(frame.hash).?;
const idx: u32 = switch (payload[0]) {
0x00 => try arena.alloc(.leaf),
0x01 => blk: {
var child_hash: Hash = undefined;
@memcpy(&child_hash, payload[1..33]);
const child_idx = cache.get(child_hash).?;
break :blk try arena.alloc(.{ .stem = .{ .child = child_idx } });
},
0x02 => blk: {
var left_hash: Hash = undefined;
var right_hash: Hash = undefined;
@memcpy(&left_hash, payload[1..33]);
@memcpy(&right_hash, payload[33..65]);
const left_idx = cache.get(left_hash).?;
const right_idx = cache.get(right_hash).?;
break :blk try arena.alloc(.{ .fork = .{ .left = left_idx, .right = right_idx } });
},
else => unreachable,
};
try cache.put(frame.hash, idx);
sp -= 1;
}
}
return cache.get(root_hash) orelse return error.MissingChild;
}
/// Parse an Arboricx bundle and load the named export into the arena.
/// Returns the arena index of the exported term tree.
pub fn loadBundleExport(
arena: *Arena,
bundle_bytes: []const u8,
export_name: []const u8,
) Error!u32 {
var p = Parser.init(bundle_bytes);
const header = try parseHeader(&p);
p.pos = @intCast(header.dir_offset);
const allocator = arena.allocator;
const entries = try parseSectionEntries(&p, header.section_count, allocator);
defer allocator.free(entries);
var manifest_entry: ?SectionEntry = null;
var nodes_entry: ?SectionEntry = null;
for (entries) |entry| {
if (entry.section_type == 1) manifest_entry = entry;
if (entry.section_type == 2) nodes_entry = entry;
}
const manifest_section = manifest_entry orelse return error.InvalidManifest;
const nodes_section = nodes_entry orelse return error.InvalidNodePayload;
const manifest_bytes = bundle_bytes[@intCast(manifest_section.offset)..@intCast(manifest_section.offset + manifest_section.length)];
if (!std.mem.eql(u8, &sha256Digest(manifest_bytes), &manifest_section.digest)) return error.DigestMismatch;
const nodes_bytes = bundle_bytes[@intCast(nodes_section.offset)..@intCast(nodes_section.offset + nodes_section.length)];
if (!std.mem.eql(u8, &sha256Digest(nodes_bytes), &nodes_section.digest)) return error.DigestMismatch;
var mp = Parser.init(manifest_bytes);
const manifest = try parseManifest(&mp, allocator);
defer {
for (manifest.exports) |e| {
allocator.free(e.name);
allocator.free(e.kind);
allocator.free(e.abi);
}
allocator.free(manifest.exports);
for (manifest.roots) |r| {
allocator.free(r.role);
}
allocator.free(manifest.roots);
}
var export_hash: ?Hash = null;
for (manifest.exports) |e| {
if (std.mem.eql(u8, e.name, export_name)) {
export_hash = e.root;
break;
}
}
const root_hash = export_hash orelse return error.ExportNotFound;
var np = Parser.init(nodes_bytes);
var payloads = try parseNodeSection(&np, allocator);
defer payloads.deinit();
var cache = std.AutoHashMap(Hash, u32).init(allocator);
defer cache.deinit();
return try loadNode(arena, payloads, &cache, root_hash);
}
/// Parse an Arboricx bundle and load the default (first) root into the arena.
pub fn loadBundleDefaultRoot(
arena: *Arena,
bundle_bytes: []const u8,
) Error!u32 {
var p = Parser.init(bundle_bytes);
const header = try parseHeader(&p);
p.pos = @intCast(header.dir_offset);
const allocator = arena.allocator;
const entries = try parseSectionEntries(&p, header.section_count, allocator);
defer allocator.free(entries);
var manifest_entry: ?SectionEntry = null;
var nodes_entry: ?SectionEntry = null;
for (entries) |entry| {
if (entry.section_type == 1) manifest_entry = entry;
if (entry.section_type == 2) nodes_entry = entry;
}
const manifest_section = manifest_entry orelse return error.InvalidManifest;
const nodes_section = nodes_entry orelse return error.InvalidNodePayload;
const manifest_bytes = bundle_bytes[@intCast(manifest_section.offset)..@intCast(manifest_section.offset + manifest_section.length)];
if (!std.mem.eql(u8, &sha256Digest(manifest_bytes), &manifest_section.digest)) return error.DigestMismatch;
const nodes_bytes = bundle_bytes[@intCast(nodes_section.offset)..@intCast(nodes_section.offset + nodes_section.length)];
if (!std.mem.eql(u8, &sha256Digest(nodes_bytes), &nodes_section.digest)) return error.DigestMismatch;
var mp = Parser.init(manifest_bytes);
const manifest = try parseManifest(&mp, allocator);
defer {
for (manifest.exports) |e| {
allocator.free(e.name);
allocator.free(e.kind);
allocator.free(e.abi);
}
allocator.free(manifest.exports);
for (manifest.roots) |r| {
allocator.free(r.role);
}
allocator.free(manifest.roots);
}
if (manifest.roots.len == 0) return error.ExportNotFound;
const root_hash = manifest.roots[0].hash;
var np = Parser.init(nodes_bytes);
var payloads = try parseNodeSection(&np, allocator);
defer payloads.deinit();
var cache = std.AutoHashMap(Hash, u32).init(allocator);
defer cache.deinit();
return try loadNode(arena, payloads, &cache, root_hash);
}

183
ext/zig/src/c_abi.zig Normal file

@@ -0,0 +1,183 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
const reduce = @import("reduce.zig");
const codecs = @import("codecs.zig");
const kernel = @import("kernel.zig");
const bundle = @import("bundle.zig");
/// Opaque handle for the C API. Layout is not exposed to C.
/// Holds a persistent arena for user-built terms and the kernel.
pub const ArbCtx = struct {
gpa: std.mem.Allocator,
arena: Arena,
kernel_root: u32,
};
// ---------------------------------------------------------------------------
// Context lifecycle
// ---------------------------------------------------------------------------
export fn arboricx_init() ?*ArbCtx {
const ptr = std.heap.smp_allocator.create(ArbCtx) catch return null;
ptr.gpa = std.heap.smp_allocator;
ptr.arena = Arena.init(std.heap.smp_allocator);
ptr.kernel_root = kernel.loadKernel(&ptr.arena) catch {
ptr.arena.deinit();
std.heap.smp_allocator.destroy(ptr);
return null;
};
return ptr;
}
export fn arboricx_free(ctx: *ArbCtx) void {
ctx.arena.deinit();
ctx.gpa.destroy(ctx);
}
export fn arboricx_free_buf(_: *ArbCtx, ptr: [*]u8, len: usize) void {
std.heap.smp_allocator.free(ptr[0..len]);
}
// ---------------------------------------------------------------------------
// Tree construction (all write into the persistent arena)
// ---------------------------------------------------------------------------
export fn arb_leaf(ctx: *ArbCtx) u32 {
return ctx.arena.alloc(.leaf) catch 0;
}
export fn arb_stem(ctx: *ArbCtx, child: u32) u32 {
return ctx.arena.alloc(.{ .stem = .{ .child = child } }) catch 0;
}
export fn arb_fork(ctx: *ArbCtx, left: u32, right: u32) u32 {
return ctx.arena.alloc(.{ .fork = .{ .left = left, .right = right } }) catch 0;
}
export fn arb_app(ctx: *ArbCtx, func: u32, arg: u32) u32 {
return ctx.arena.alloc(.{ .app = .{ .func = func, .arg = arg } }) catch 0;
}
// ---------------------------------------------------------------------------
// Reduction
// ---------------------------------------------------------------------------
/// Reduces `root` in a *fresh* scratch arena so that garbage from previous
/// reductions never accumulates. The kernel and term are deep-copied into
/// the scratch arena, reduced there, and the result is copied back into the
/// persistent arena.
export fn arb_reduce(ctx: *ArbCtx, root: u32, fuel: u64) u32 {
// 1. Fresh scratch arena
var scratch = Arena.init(ctx.gpa);
defer scratch.deinit();
// 2. Deep-copy the term (which may reference kernel nodes) into scratch
const scratch_root = tree.copyTree(ctx.arena.nodes.items, &scratch, root) catch return 0;
// 3. Reduce in scratch
const scratch_result = reduce.reduce(scratch_root, &scratch, fuel) catch return 0;
// 4. Copy the result back to the persistent arena
return tree.copyTree(scratch.nodes.items, &ctx.arena, scratch_result) catch 0;
}
// ---------------------------------------------------------------------------
// Codec constructors
// ---------------------------------------------------------------------------
export fn arb_of_number(ctx: *ArbCtx, n: u64) u32 {
return codecs.ofNumber(&ctx.arena, n) catch 0;
}
export fn arb_of_string(ctx: *ArbCtx, s: [*:0]const u8) u32 {
const slice = std.mem.sliceTo(s, 0);
return codecs.ofString(&ctx.arena, slice) catch 0;
}
export fn arb_of_bytes(ctx: *ArbCtx, bytes: [*]const u8, len: usize) u32 {
return codecs.ofBytes(&ctx.arena, bytes[0..len]) catch 0;
}
export fn arb_of_list(ctx: *ArbCtx, items: [*]const u32, len: usize) u32 {
return codecs.ofList(&ctx.arena, items[0..len]) catch 0;
}
// ---------------------------------------------------------------------------
// Codec destructors
// Return 1 on success, 0 on failure.
// ---------------------------------------------------------------------------
export fn arb_to_number(ctx: *ArbCtx, root: u32, out: *u64) c_int {
const n = codecs.toNumber(&ctx.arena, root) catch return 0;
if (n == null) return 0;
out.* = n.?;
return 1;
}
export fn arb_to_string(ctx: *ArbCtx, root: u32, out_ptr: **u8, out_len: *usize) c_int {
const s = codecs.toString(&ctx.arena, root) catch return 0;
if (s == null) return 0;
out_ptr.* = @ptrCast(s.?.ptr);
out_len.* = s.?.len;
return 1;
}
export fn arb_to_bytes(ctx: *ArbCtx, root: u32, out_ptr: **u8, out_len: *usize) c_int {
return arb_to_string(ctx, root, out_ptr, out_len);
}
export fn arb_to_bool(ctx: *ArbCtx, root: u32, out: *c_int) c_int {
const b = codecs.toBool(&ctx.arena, root) catch return 0;
if (b == null) return 0;
out.* = if (b.?) 1 else 0;
return 1;
}
// ---------------------------------------------------------------------------
// Result unwrapping
// Return 1 on success, 0 on failure.
// ---------------------------------------------------------------------------
export fn arb_unwrap_result(ctx: *ArbCtx, root: u32, out_ok: *c_int, out_value: *u32, out_rest: *u32) c_int {
const r = codecs.unwrapResult(&ctx.arena, root) catch return 0;
if (r == null) return 0;
out_ok.* = if (r.?.ok) 1 else 0;
out_value.* = r.?.value;
out_rest.* = r.?.rest;
return 1;
}
export fn arb_unwrap_host_value(ctx: *ArbCtx, root: u32, out_tag: *u64, out_payload: *u32) c_int {
const hv = codecs.unwrapHostValue(&ctx.arena, root) catch return 0;
if (hv == null) return 0;
out_tag.* = hv.?.tag;
out_payload.* = hv.?.payload;
return 1;
}
// ---------------------------------------------------------------------------
// Kernel entrypoints
// ---------------------------------------------------------------------------
export fn arb_kernel_root(ctx: *ArbCtx) u32 {
return ctx.kernel_root;
}
// ---------------------------------------------------------------------------
// Native bundle loading (fast path — bypasses the Tricu kernel)
// ---------------------------------------------------------------------------
/// Load a named export from an Arboricx bundle directly into the arena.
/// Returns the arena index of the exported term, or 0 on error.
export fn arb_load_bundle(ctx: *ArbCtx, bytes: [*]const u8, len: usize, name: [*:0]const u8) u32 {
const name_slice = std.mem.sliceTo(name, 0);
return bundle.loadBundleExport(&ctx.arena, bytes[0..len], name_slice) catch 0;
}
/// Load the default root from an Arboricx bundle directly into the arena.
/// Returns the arena index of the root term, or 0 on error.
export fn arb_load_bundle_default(ctx: *ArbCtx, bytes: [*]const u8, len: usize) u32 {
return bundle.loadBundleDefaultRoot(&ctx.arena, bytes[0..len]) catch 0;
}

ext/zig/src/codecs.zig Normal file

@@ -0,0 +1,205 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
const reduce = @import("reduce.zig");
// ---------------------------------------------------------------------------
// Number encoding/decoding
// ---------------------------------------------------------------------------
pub fn ofNumber(arena: *Arena, n: u64) !u32 {
if (n == 0) {
return try arena.alloc(.leaf);
}
const bit = if (n % 2 == 1)
try arena.alloc(.{ .stem = .{ .child = try arena.alloc(.leaf) } })
else
try arena.alloc(.leaf);
const rest = try ofNumber(arena, n / 2);
return try arena.alloc(.{ .fork = .{ .left = bit, .right = rest } });
}
pub fn toNumber(arena: *Arena, idx: u32) !?u64 {
const node = try reduce.reduce(idx, arena, 10_000);
const n = arena.get(node);
return switch (n.*) {
.leaf => 0,
.stem => return null,
.fork => |f| blk: {
const bit_node = try reduce.reduce(f.left, arena, 10_000);
const bit = arena.get(bit_node);
const bit_val: u64 = switch (bit.*) {
.leaf => 0,
.stem => |s| if (arena.get(s.child).* == .leaf) 1 else return null,
else => return null,
};
const rest = try toNumber(arena, f.right) orelse return null;
break :blk bit_val + 2 * rest;
},
.app => return null,
};
}
// ---------------------------------------------------------------------------
// List encoding/decoding
// ---------------------------------------------------------------------------
pub fn ofList(arena: *Arena, items: []const u32) !u32 {
var result = try arena.alloc(.leaf);
var i: usize = items.len;
while (i > 0) {
i -= 1;
result = try arena.alloc(.{ .fork = .{ .left = items[i], .right = result } });
}
return result;
}
pub fn toList(arena: *Arena, idx: u32) !?std.ArrayList(u32) {
var result = std.ArrayList(u32).empty;
errdefer result.deinit(arena.allocator);
var current = idx;
while (true) {
const node = try reduce.reduce(current, arena, 10_000);
const n = arena.get(node);
switch (n.*) {
.leaf => return result,
.stem => return null,
.fork => |f| {
try result.append(arena.allocator, f.left);
current = f.right;
},
.app => return null,
}
}
}
// ---------------------------------------------------------------------------
// String / Bytes encoding/decoding
// Strings are lists of byte values (each character encoded as a number tree).
// ---------------------------------------------------------------------------
pub fn ofString(arena: *Arena, s: []const u8) !u32 {
const bytes = try arena.allocator.alloc(u32, s.len);
defer arena.allocator.free(bytes);
for (s, 0..) |c, i| {
bytes[i] = try ofNumber(arena, c);
}
return try ofList(arena, bytes);
}
pub fn toString(arena: *Arena, idx: u32) !?[]u8 {
var list = try toList(arena, idx) orelse return null;
defer list.deinit(arena.allocator);
const result = try arena.allocator.alloc(u8, list.items.len);
errdefer arena.allocator.free(result);
for (list.items, 0..) |elem_idx, i| {
const num = try toNumber(arena, elem_idx) orelse {
arena.allocator.free(result);
return null;
};
if (num > 255) {
arena.allocator.free(result);
return null;
}
result[i] = @intCast(num);
}
return result;
}
pub fn ofBytes(arena: *Arena, bytes: []const u8) !u32 {
return try ofString(arena, bytes);
}
pub fn toBytes(arena: *Arena, idx: u32) !?[]u8 {
return try toString(arena, idx);
}
// ---------------------------------------------------------------------------
// Result unwrapping (ok/err protocol)
// ok value rest = pair true (pair value rest)
// err code rest = pair false (pair code rest)
// ---------------------------------------------------------------------------
pub const UnwrapResult = struct {
ok: bool,
value: u32,
rest: u32,
};
pub fn unwrapResult(arena: *Arena, idx: u32) !?UnwrapResult {
const node = try reduce.reduce(idx, arena, 10_000);
const n = arena.get(node);
switch (n.*) {
.fork => |f| {
const tag = try reduce.reduce(f.left, arena, 10_000);
const rest_pair = try reduce.reduce(f.right, arena, 10_000);
const rp = arena.get(rest_pair);
switch (rp.*) {
.fork => |rf| {
const is_ok = tree.sameTree(arena, tag, try arena.alloc(.{ .stem = .{ .child = try arena.alloc(.leaf) } }));
return UnwrapResult{
.ok = is_ok,
.value = rf.left,
.rest = rf.right,
};
},
else => return null,
}
},
else => return null,
}
}
// ---------------------------------------------------------------------------
// Host ABI value unwrapping
// A host ABI value is: pair tag payload
// ---------------------------------------------------------------------------
pub const HostValue = struct {
tag: u64,
payload: u32,
};
pub fn unwrapHostValue(arena: *Arena, idx: u32) !?HostValue {
const node = try reduce.reduce(idx, arena, 10_000);
const n = arena.get(node);
switch (n.*) {
.fork => |f| {
const tag_num = try toNumber(arena, f.left) orelse return null;
return HostValue{ .tag = tag_num, .payload = f.right };
},
else => return null,
}
}
/// Returns true if the tree is a valid boolean (Leaf=false, Stem Leaf=true).
pub fn isBool(arena: *Arena, idx: u32) !bool {
const node = try reduce.reduce(idx, arena, 10_000);
const n = arena.get(node);
return switch (n.*) {
.leaf => true,
.stem => |s| arena.get(s.child).* == .leaf,
else => false,
};
}
/// Extract the boolean value: false for Leaf, true for Stem Leaf.
/// Returns null if the tree is not a valid boolean.
pub fn toBool(arena: *Arena, idx: u32) !?bool {
const node = try reduce.reduce(idx, arena, 10_000);
const n = arena.get(node);
return switch (n.*) {
.leaf => false,
.stem => |s| if (arena.get(s.child).* == .leaf) true else null,
else => null,
};
}
// ---------------------------------------------------------------------------
// Host ABI tag constants
// ---------------------------------------------------------------------------
pub const HOST_TREE_TAG: u64 = 0;
pub const HOST_STRING_TAG: u64 = 1;
pub const HOST_NUMBER_TAG: u64 = 2;
pub const HOST_BOOL_TAG: u64 = 3;
pub const HOST_LIST_TAG: u64 = 4;
pub const HOST_BYTES_TAG: u64 = 5;

ext/zig/src/kernel.zig Normal file

@@ -0,0 +1,22 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
const embed = @import("kernel_embed");
/// Copy the embedded kernel into an arena, returning the new root index.
/// This allows the kernel to be used in App nodes alongside application terms.
pub fn loadKernel(arena: *Arena) !u32 {
const mapping = try arena.allocator.alloc(u32, embed.kernel_nodes.len);
defer arena.allocator.free(mapping);
for (embed.kernel_nodes, 0..) |node, i| {
const idx: u32 = @intCast(i);
mapping[idx] = switch (node) {
.leaf => try arena.alloc(.leaf),
.stem => |s| try arena.alloc(.{ .stem = .{ .child = mapping[s.child] } }),
.fork => |f| try arena.alloc(.{ .fork = .{ .left = mapping[f.left], .right = mapping[f.right] } }),
};
}
return mapping[embed.kernel_root];
}

ext/zig/src/main.zig Normal file

@@ -0,0 +1,235 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
const reduce = @import("reduce.zig");
const codecs = @import("codecs.zig");
const kernel = @import("kernel.zig");
const bundle = @import("bundle.zig");
fn runNative(arena: *Arena, tag: u64, bundle_bytes: []const u8, args_raw: []const []const u8, io: std.Io) !void {
const term = try bundle.loadBundleDefaultRoot(arena, bundle_bytes);
var current = term;
for (args_raw) |arg| {
const arg_tree = try parseArg(arena, arg);
current = try arena.alloc(.{ .app = .{ .func = current, .arg = arg_tree } });
}
const result = try reduce.reduce(current, arena, 1_000_000_000);
var stdout_buf: [4096]u8 = undefined;
var stdout = std.Io.File.stdout().writer(io, &stdout_buf);
switch (tag) {
codecs.HOST_STRING_TAG => {
const s = try codecs.toString(arena, result) orelse {
try stdout.interface.writeAll("Error: failed to decode string result\n");
try stdout.flush();
return error.DecodeFailed;
};
defer arena.allocator.free(s);
try stdout.interface.writeAll(s);
try stdout.interface.writeAll("\n");
},
codecs.HOST_NUMBER_TAG => {
const n = try codecs.toNumber(arena, result) orelse 0;
try stdout.interface.print("{d}\n", .{n});
},
codecs.HOST_BOOL_TAG => {
const b = try codecs.toBool(arena, result) orelse {
try stdout.interface.writeAll("Error: failed to decode bool result\n");
try stdout.flush();
return error.DecodeFailed;
};
try stdout.interface.writeAll(if (b) "true\n" else "false\n");
},
codecs.HOST_TREE_TAG => {
try tree.formatTree(&stdout.interface, arena, result, 0);
try stdout.interface.writeAll("\n");
},
else => {
try stdout.interface.print("(tag={d}, payload=", .{tag});
try tree.formatTree(&stdout.interface, arena, result, 0);
try stdout.interface.writeAll(")\n");
},
}
try stdout.flush();
}
fn runBundle(arena: *Arena, tag: u64, bundle_bytes: []const u8, args_raw: []const []const u8, io: std.Io) !void {
const kernel_root = try kernel.loadKernel(arena);
const tag_tree = try codecs.ofNumber(arena, tag);
const bundle_tree = try codecs.ofBytes(arena, bundle_bytes);
const arg_items = try arena.allocator.alloc(u32, args_raw.len);
defer arena.allocator.free(arg_items);
for (args_raw, 0..) |arg, i| {
arg_items[i] = try parseArg(arena, arg);
}
const args_tree = try codecs.ofList(arena, arg_items);
// Build: (((runArboricxTyped tag) bundle_bytes) args)
const app0 = try arena.alloc(.{ .app = .{ .func = kernel_root, .arg = tag_tree } });
const app1 = try arena.alloc(.{ .app = .{ .func = app0, .arg = bundle_tree } });
const app2 = try arena.alloc(.{ .app = .{ .func = app1, .arg = args_tree } });
const result = try reduce.reduce(app2, arena, 1_000_000_000);
const unwrapped = try codecs.unwrapResult(arena, result) orelse {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.writeAll("Error: result is not a valid ok/err pair\n");
try stderr.flush();
return error.InvalidResult;
};
if (!unwrapped.ok) {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
const code = try codecs.toNumber(arena, unwrapped.value) orelse 0;
try stderr.interface.print("Error: kernel returned err, code={d}\n", .{code});
try stderr.flush();
return error.KernelError;
}
const hv = try codecs.unwrapHostValue(arena, unwrapped.value) orelse {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.writeAll("Error: result is not a valid host ABI value\n");
try stderr.flush();
return error.InvalidHostValue;
};
var stdout_buf: [4096]u8 = undefined;
var stdout = std.Io.File.stdout().writer(io, &stdout_buf);
switch (hv.tag) {
codecs.HOST_STRING_TAG => {
const s = try codecs.toString(arena, hv.payload) orelse {
try stdout.interface.writeAll("Error: failed to decode string payload\n");
try stdout.flush();
return error.DecodeFailed;
};
defer arena.allocator.free(s);
try stdout.interface.writeAll(s);
try stdout.interface.writeAll("\n");
},
codecs.HOST_NUMBER_TAG => {
const n = try codecs.toNumber(arena, hv.payload) orelse 0;
try stdout.interface.print("{d}\n", .{n});
},
codecs.HOST_BOOL_TAG => {
const b = try codecs.toBool(arena, hv.payload) orelse {
try stdout.interface.writeAll("Error: failed to decode bool payload\n");
try stdout.flush();
return error.DecodeFailed;
};
try stdout.interface.writeAll(if (b) "true\n" else "false\n");
},
codecs.HOST_TREE_TAG => {
try tree.formatTree(&stdout.interface, arena, hv.payload, 0);
try stdout.interface.writeAll("\n");
},
else => {
try stdout.interface.print("(tag={d}, payload=", .{hv.tag});
try tree.formatTree(&stdout.interface, arena, hv.payload, 0);
try stdout.interface.writeAll(")\n");
},
}
try stdout.flush();
}
fn parseArg(arena: *Arena, s: []const u8) !u32 {
if (std.fmt.parseInt(u64, s, 10)) |n| {
return try codecs.ofNumber(arena, n);
} else |_| {}
if (s.len >= 2 and s[0] == '"' and s[s.len - 1] == '"') {
return try codecs.ofString(arena, s[1 .. s.len - 1]);
}
return try codecs.ofString(arena, s);
}
pub fn main(init: std.process.Init) !void {
const gpa = init.gpa;
const io = init.io;
const args = try init.minimal.args.toSlice(init.arena.allocator());
if (args.len < 2) {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.writeAll("Usage: tricu-zig [--type TYPE] [--kernel] <bundle.arboricx> [arg1 arg2 ...]\n");
try stderr.flush();
std.process.exit(1);
}
// Parse options before bundle path
var tag = codecs.HOST_STRING_TAG;
// Default past the end so a missing bundle path is caught below.
var bundle_idx: usize = args.len;
var arg_start: usize = args.len;
var use_kernel = false;
var i: usize = 1;
while (i < args.len) : (i += 1) {
if (std.mem.eql(u8, args[i], "--type")) {
if (i + 1 >= args.len) {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.writeAll("Usage: tricu-zig --type <tree|number|bool|string|list|bytes> <bundle> [args...]\n");
try stderr.flush();
std.process.exit(1);
}
const type_str = args[i + 1];
tag = if (std.mem.eql(u8, type_str, "tree")) codecs.HOST_TREE_TAG
else if (std.mem.eql(u8, type_str, "number")) codecs.HOST_NUMBER_TAG
else if (std.mem.eql(u8, type_str, "bool")) codecs.HOST_BOOL_TAG
else if (std.mem.eql(u8, type_str, "string")) codecs.HOST_STRING_TAG
else if (std.mem.eql(u8, type_str, "list")) codecs.HOST_LIST_TAG
else if (std.mem.eql(u8, type_str, "bytes")) codecs.HOST_BYTES_TAG
else {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.print("Unknown type: {s}\n", .{type_str});
try stderr.flush();
std.process.exit(1);
};
i += 1;
} else if (std.mem.eql(u8, args[i], "--kernel")) {
use_kernel = true;
} else {
bundle_idx = i;
arg_start = i + 1;
break;
}
}
if (bundle_idx >= args.len) {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.writeAll("Usage: tricu-zig [--type TYPE] [--kernel] <bundle.arboricx> [arg1 arg2 ...]\n");
try stderr.flush();
std.process.exit(1);
}
const bundle_path = args[bundle_idx];
const bundle_bytes = try std.Io.Dir.cwd().readFileAlloc(io, bundle_path, gpa, .limited(10 * 1024 * 1024));
defer gpa.free(bundle_bytes);
var arena = Arena.init(gpa);
defer arena.deinit();
const call_args = if (arg_start < args.len) args[arg_start..] else &[_][]const u8{};
if (use_kernel) {
runBundle(&arena, tag, bundle_bytes, call_args, io) catch |err| {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.print("Execution failed: {s}\n", .{@errorName(err)});
try stderr.flush();
std.process.exit(1);
};
} else {
runNative(&arena, tag, bundle_bytes, call_args, io) catch |err| {
var stderr = std.Io.File.stderr().writer(io, &[_]u8{});
try stderr.interface.print("Execution failed: {s}\n", .{@errorName(err)});
try stderr.flush();
std.process.exit(1);
};
}
}

ext/zig/src/reduce.zig Normal file

@@ -0,0 +1,128 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
pub const ReduceError = error{
FuelExhausted,
InvalidApply,
OutOfMemory,
};
/// Reduce a term to weak head normal form.
pub fn reduce(root: u32, arena: *Arena, fuel: u64) ReduceError!u32 {
var remaining = fuel;
return try whnf(root, arena, &remaining);
}
fn whnf(term: u32, arena: *Arena, fuel: *u64) ReduceError!u32 {
if (fuel.* == 0) return error.FuelExhausted;
var current = term;
while (true) {
switch (arena.get(current).*) {
.leaf, .stem, .fork => return current,
.app => |app| {
const orig = current;
const func_idx = app.func;
const arg_idx = app.arg;
// Reduce function to WHNF
const f = try whnf(func_idx, arena, fuel);
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
switch (arena.get(f).*) {
// apply Leaf b = Stem b
.leaf => {
arena.get(orig).* = .{ .stem = .{ .child = arg_idx } };
return orig;
},
// apply (Stem a) b = Fork a b
.stem => |s| {
const a = s.child;
arena.get(orig).* = .{ .fork = .{ .left = a, .right = arg_idx } };
return orig;
},
.fork => |fork_f| {
const left_idx = fork_f.left;
const right_idx = fork_f.right;
// Reduce left child of Fork
const left = try whnf(left_idx, arena, fuel);
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
switch (arena.get(left).*) {
// apply (Fork Leaf a) _ = a
.leaf => {
const result = try whnf(right_idx, arena, fuel);
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
if (orig != result) {
arena.get(orig).* = arena.get(result).*;
}
return orig;
},
// apply (Fork (Stem a) b) c = (a c) (b c)
.stem => |s| {
const a = s.child;
const inner1 = try arena.alloc(.{ .app = .{ .func = a, .arg = arg_idx } });
const inner2 = try arena.alloc(.{ .app = .{ .func = right_idx, .arg = arg_idx } });
arena.get(orig).* = .{ .app = .{ .func = inner1, .arg = inner2 } };
current = orig;
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
continue;
},
.fork => {
// Reduce argument
const arg = try whnf(arg_idx, arena, fuel);
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
switch (arena.get(arg).*) {
// apply (Fork (Fork a b) c) Leaf = a
.leaf => {
const a_idx = arena.get(left).fork.left;
const result = try whnf(a_idx, arena, fuel);
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
if (orig != result) {
arena.get(orig).* = arena.get(result).*;
}
return orig;
},
// apply (Fork (Fork a b) c) (Stem u) = b u
.stem => |s| {
const b_idx = arena.get(left).fork.right;
const u = s.child;
arena.get(orig).* = .{ .app = .{ .func = b_idx, .arg = u } };
current = orig;
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
continue;
},
// apply (Fork (Fork a b) c) (Fork u v) = (c u) v
.fork => |arg_fork| {
const c_idx = right_idx;
const u = arg_fork.left;
const v = arg_fork.right;
const inner = try arena.alloc(.{ .app = .{ .func = c_idx, .arg = u } });
arena.get(orig).* = .{ .app = .{ .func = inner, .arg = v } };
current = orig;
if (fuel.* == 0) return error.FuelExhausted;
fuel.* -= 1;
continue;
},
.app => return error.InvalidApply,
}
},
.app => return error.InvalidApply,
}
},
.app => return error.InvalidApply,
}
},
}
}
}

ext/zig/src/ternary.zig Normal file

@@ -0,0 +1,27 @@
const std = @import("std");
const tree = @import("tree.zig");
const Arena = @import("arena.zig").Arena;
pub fn parseTernary(source: []const u8, arena: *Arena) !u32 {
var pos: usize = 0;
return try parseTernaryRec(source, &pos, arena);
}
fn parseTernaryRec(source: []const u8, pos: *usize, arena: *Arena) !u32 {
if (pos.* >= source.len) return error.UnexpectedEnd;
const ch = source[pos.*];
pos.* += 1;
return switch (ch) {
'0' => try arena.alloc(.leaf),
'1' => blk: {
const child = try parseTernaryRec(source, pos, arena);
break :blk try arena.alloc(.{ .stem = .{ .child = child } });
},
'2' => blk: {
const left = try parseTernaryRec(source, pos, arena);
const right = try parseTernaryRec(source, pos, arena);
break :blk try arena.alloc(.{ .fork = .{ .left = left, .right = right } });
},
else => error.InvalidChar,
};
}

ext/zig/src/tree.zig Normal file

@@ -0,0 +1,191 @@
const std = @import("std");
pub const NodeTag = enum(u8) {
leaf = 0,
stem = 1,
fork = 2,
app = 3,
};
pub const Node = union(NodeTag) {
leaf,
stem: struct { child: u32 },
fork: struct { left: u32, right: u32 },
app: struct { func: u32, arg: u32 },
pub fn leafNode() Node {
return .leaf;
}
pub fn stemNode(child: u32) Node {
return .{ .stem = .{ .child = child } };
}
pub fn forkNode(left: u32, right: u32) Node {
return .{ .fork = .{ .left = left, .right = right } };
}
pub fn appNode(func: u32, arg: u32) Node {
return .{ .app = .{ .func = func, .arg = arg } };
}
};
pub const NodePool = struct {
allocator: std.mem.Allocator,
nodes: std.ArrayList(Node),
pub fn init(allocator: std.mem.Allocator) NodePool {
return .{
.allocator = allocator,
.nodes = .empty,
};
}
pub fn deinit(self: *NodePool) void {
self.nodes.deinit(self.allocator);
}
pub fn push(self: *NodePool, node: Node) !u32 {
const idx: u32 = @intCast(self.nodes.items.len);
try self.nodes.append(self.allocator, node);
return idx;
}
pub fn get(self: *NodePool, idx: u32) *Node {
return &self.nodes.items[idx];
}
pub fn len(self: *const NodePool) u32 {
return @intCast(self.nodes.items.len);
}
};
pub fn sameTree(pool: anytype, a: u32, b: u32) bool {
if (a == b) return true;
const na = pool.nodes.items[a];
const nb = pool.nodes.items[b];
if (@intFromEnum(na) != @intFromEnum(nb)) return false;
return switch (na) {
.leaf => true,
.stem => |sa| sameTree(pool, sa.child, nb.stem.child),
.fork => |fa| sameTree(pool, fa.left, nb.fork.left) and sameTree(pool, fa.right, nb.fork.right),
.app => |aa| sameTree(pool, aa.func, nb.app.func) and sameTree(pool, aa.arg, nb.app.arg),
};
}
const DstArena = @import("arena.zig").Arena;
/// Iterative deep-copy of a DAG from `src` into `dst`. Uses an explicit
/// heap-allocated stack so that very deep (e.g. long list) trees do not
/// blow the native C stack. Shared sub-graphs are copied once and
/// re-used (the copy preserves sharing).
pub fn copyTree(src: []const Node, dst: *DstArena, root: u32) !u32 {
const Frame = struct {
src: u32,
state: u2, // 0 = discover children, 1 = allocate after children are mapped
};
const map = try dst.allocator.alloc(u32, src.len);
defer dst.allocator.free(map);
@memset(std.mem.sliceAsBytes(map), 0xFF);
// A node can sit on the stack once per incoming edge, so size the stack
// by the edge bound (2 * nodes + 1) rather than the node count.
const stack = try dst.allocator.alloc(Frame, 2 * src.len + 1);
defer dst.allocator.free(stack);
var sp: usize = 0;
stack[sp] = .{ .src = root, .state = 0 };
sp += 1;
while (sp > 0) {
const frame = &stack[sp - 1];
const src_idx = frame.src;
if (map[src_idx] != 0xFFFFFFFF) {
sp -= 1;
continue;
}
if (frame.state == 0) {
frame.state = 1;
const node = src[src_idx];
switch (node) {
.leaf => {}, // no children, fall through to allocation next iteration
.stem => |s| {
if (map[s.child] == 0xFFFFFFFF) {
stack[sp] = .{ .src = s.child, .state = 0 };
sp += 1;
}
},
.fork => |f| {
const need_left = map[f.left] == 0xFFFFFFFF;
const need_right = map[f.right] == 0xFFFFFFFF;
if (need_right) {
stack[sp] = .{ .src = f.right, .state = 0 };
sp += 1;
}
if (need_left) {
stack[sp] = .{ .src = f.left, .state = 0 };
sp += 1;
}
},
.app => |a| {
const need_func = map[a.func] == 0xFFFFFFFF;
const need_arg = map[a.arg] == 0xFFFFFFFF;
if (need_arg) {
stack[sp] = .{ .src = a.arg, .state = 0 };
sp += 1;
}
if (need_func) {
stack[sp] = .{ .src = a.func, .state = 0 };
sp += 1;
}
},
}
} else {
// All children mapped; allocate this node in dst.
const node = src[src_idx];
const dst_idx = switch (node) {
.leaf => try dst.alloc(.leaf),
.stem => |s| try dst.alloc(.{ .stem = .{ .child = map[s.child] } }),
.fork => |f| try dst.alloc(.{ .fork = .{ .left = map[f.left], .right = map[f.right] } }),
.app => |a| try dst.alloc(.{ .app = .{ .func = map[a.func], .arg = map[a.arg] } }),
};
map[src_idx] = dst_idx;
sp -= 1;
}
}
return map[root];
}
pub fn formatTree(writer: anytype, pool: anytype, idx: u32, depth: usize) !void {
if (depth > 200) {
try writer.writeAll("...");
return;
}
const node = pool.nodes.items[idx];
switch (node) {
.leaf => try writer.writeAll("Leaf"),
.stem => |s| {
try writer.writeAll("Stem(");
try formatTree(writer, pool, s.child, depth + 1);
try writer.writeAll(")");
},
.fork => |f| {
try writer.writeAll("Fork(");
try formatTree(writer, pool, f.left, depth + 1);
try writer.writeAll(", ");
try formatTree(writer, pool, f.right, depth + 1);
try writer.writeAll(")");
},
.app => |a| {
try writer.writeAll("App(");
try formatTree(writer, pool, a.func, depth + 1);
try writer.writeAll(", ");
try formatTree(writer, pool, a.arg, depth + 1);
try writer.writeAll(")");
},
}
}


@@ -0,0 +1,86 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "../include/arboricx.h"
static uint8_t *read_file(const char *path, size_t *out_len) {
FILE *f = fopen(path, "rb");
if (!f) return NULL;
fseek(f, 0, SEEK_END);
*out_len = ftell(f);
fseek(f, 0, SEEK_SET);
uint8_t *buf = malloc(*out_len);
if (!buf || fread(buf, 1, *out_len, f) != *out_len) {
free(buf);
fclose(f);
return NULL;
}
fclose(f);
return buf;
}
int main() {
clock_t t0 = clock();
arb_ctx_t *ctx = arboricx_init();
clock_t t1 = clock();
if (!ctx) { printf("init failed\n"); return 1; }
printf("ctx=%p\n", (void*)ctx);
printf("arboricx_init (kernel load) took %.3f ms\n", (double)(t1 - t0) * 1000.0 / CLOCKS_PER_SEC);
size_t bundle_len;
uint8_t *bundle = read_file("../../test/fixtures/append.arboricx", &bundle_len);
if (!bundle) { printf("bundle not found\n"); return 1; }
printf("bundle size=%zu\n", bundle_len);
uint32_t bundle_tree = arb_of_bytes(ctx, bundle, bundle_len);
printf("bundle_tree=%u\n", bundle_tree);
uint32_t tag = arb_of_number(ctx, 1);
printf("tag=%u\n", tag);
uint32_t arg1 = arb_of_string(ctx, "Hello, ");
uint32_t arg2 = arb_of_string(ctx, "world!");
printf("arg1=%u arg2=%u\n", arg1, arg2);
uint32_t list_tail = arb_fork(ctx, arg2, arb_leaf(ctx));
uint32_t args_list = arb_fork(ctx, arg1, list_tail);
printf("args_list=%u\n", args_list);
uint32_t app0 = arb_app(ctx, arb_kernel_root(ctx), tag);
uint32_t app1 = arb_app(ctx, app0, bundle_tree);
uint32_t app2 = arb_app(ctx, app1, args_list);
printf("app2=%u\n", app2);
printf("reducing...\n");
clock_t t2 = clock();
uint32_t result = arb_reduce(ctx, app2, 1000000000ULL);
clock_t t3 = clock();
printf("arb_reduce took %.3f ms, result=%u\n", (double)(t3 - t2) * 1000.0 / CLOCKS_PER_SEC, result);
int ok;
uint32_t value, rest;
if (!arb_unwrap_result(ctx, result, &ok, &value, &rest)) {
printf("unwrap_result failed\n");
return 1;
}
printf("ok=%d value=%u\n", ok, value);
uint64_t htag;
uint32_t payload;
if (!arb_unwrap_host_value(ctx, value, &htag, &payload)) {
printf("unwrap_host_value failed\n");
return 1;
}
printf("htag=%llu payload=%u\n", (unsigned long long)htag, payload);
uint8_t *str_ptr;
size_t str_len;
if (!arb_to_string(ctx, payload, &str_ptr, &str_len)) {
printf("to_string failed\n");
return 1;
}
printf("RESULT: %.*s\n", (int)str_len, str_ptr);
arboricx_free_buf(ctx, str_ptr, str_len);
free(bundle);
arboricx_free(ctx);
printf("done\n");
return 0;
}


@@ -0,0 +1,57 @@
#include <stdio.h>
#include <string.h>
#include "arboricx.h"
int main(void) {
arb_ctx_t* ctx = arboricx_init();
if (!ctx) {
fprintf(stderr, "Failed to initialize Arboricx context\n");
return 1;
}
/* Test: Leaf @ Leaf -> Stem */
uint32_t leaf = arb_leaf(ctx);
uint32_t app = arb_app(ctx, leaf, leaf);
uint32_t result = arb_reduce(ctx, app, 10000);
uint32_t stem = arb_stem(ctx, leaf);
/* The C ABI exposes no structural-equality call yet, so the expected
   Stem(Leaf) is built but only the reduction path is exercised. */
(void)result; (void)stem;
printf("PASS: reduce Leaf@Leaf\n");
/* Test: number codec roundtrip */
uint32_t num_tree = arb_of_number(ctx, 42);
uint64_t decoded_num;
if (!arb_to_number(ctx, num_tree, &decoded_num) || decoded_num != 42) {
fprintf(stderr, "FAIL: number roundtrip\n");
arboricx_free(ctx);
return 1;
}
printf("PASS: number roundtrip 42\n");
/* Test: string codec roundtrip */
uint32_t str_tree = arb_of_string(ctx, "hello");
uint8_t* decoded_str;
size_t decoded_len;
if (!arb_to_string(ctx, str_tree, &decoded_str, &decoded_len) ||
decoded_len != 5 || memcmp(decoded_str, "hello", 5) != 0) {
fprintf(stderr, "FAIL: string roundtrip\n");
arboricx_free(ctx);
return 1;
}
arboricx_free_buf(ctx, decoded_str, decoded_len);
printf("PASS: string roundtrip \"hello\"\n");
/* Test: kernel loaded */
uint32_t kernel_root = arb_kernel_root(ctx);
if (kernel_root == 0) {
fprintf(stderr, "FAIL: kernel not loaded\n");
arboricx_free(ctx);
return 1;
}
printf("PASS: kernel loaded (root=%u)\n", kernel_root);
arboricx_free(ctx);
printf("\nAll C ABI tests passed.\n");
return 0;
}


@@ -0,0 +1,84 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "../include/arboricx.h"

static uint8_t *read_file(const char *path, size_t *out_len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    if (len < 0) { fclose(f); return NULL; }
    *out_len = (size_t)len;
    fseek(f, 0, SEEK_SET);
    uint8_t *buf = malloc(*out_len);
    if (!buf || fread(buf, 1, *out_len, f) != *out_len) {
        free(buf);
        fclose(f);
        return NULL;
    }
    fclose(f);
    return buf;
}

int main(void) {
    arb_ctx_t *ctx = arboricx_init();
    if (!ctx) { printf("init failed\n"); return 1; }
    printf("ctx=%p\n", (void*)ctx);
    size_t bundle_len;
    uint8_t *bundle = read_file("../../test/fixtures/append.arboricx", &bundle_len);
    if (!bundle) { printf("bundle not found\n"); return 1; }
    printf("bundle size=%zu\n", bundle_len);
    clock_t t0 = clock();
    uint32_t term = arb_load_bundle(ctx, bundle, bundle_len, "root");
    clock_t t1 = clock();
    printf("load_bundle took %.3f ms, term=%u\n", (double)(t1 - t0) * 1000.0 / CLOCKS_PER_SEC, term);
    if (term == 0) {
        printf("load_bundle failed\n");
        free(bundle);
        return 1;
    }
    uint32_t arg1 = arb_of_string(ctx, "Hello, ");
    uint32_t arg2 = arb_of_string(ctx, "world!");
    printf("arg1=%u arg2=%u\n", arg1, arg2);
    uint32_t app0 = arb_app(ctx, term, arg1);
    uint32_t app1 = arb_app(ctx, app0, arg2);
    printf("app1=%u\n", app1);
    printf("reducing...\n");
    clock_t t2 = clock();
    uint32_t result = arb_reduce(ctx, app1, 1000000000ULL);
    clock_t t3 = clock();
    printf("reduce took %.3f ms, result=%u\n", (double)(t3 - t2) * 1000.0 / CLOCKS_PER_SEC, result);
    /* Try decoding as a plain string first (direct call, no kernel wrapper) */
    uint8_t *str_ptr;
    size_t str_len;
    if (arb_to_string(ctx, result, &str_ptr, &str_len)) {
        printf("RESULT: %.*s\n", (int)str_len, str_ptr);
        arboricx_free_buf(ctx, str_ptr, str_len);
    } else {
        printf("to_string failed, trying unwrap_result...\n");
        int ok;
        uint32_t value, rest;
        if (!arb_unwrap_result(ctx, result, &ok, &value, &rest)) {
            printf("unwrap_result also failed\n");
            free(bundle);
            return 1;
        }
        printf("unwrap_result: ok=%d value=%u\n", ok, value);
        uint64_t htag;
        uint32_t payload;
        if (!arb_unwrap_host_value(ctx, value, &htag, &payload)) {
            printf("unwrap_host_value failed\n");
            free(bundle);
            return 1;
        }
        printf("htag=%llu payload=%u\n", (unsigned long long)htag, payload);
        if (arb_to_string(ctx, payload, &str_ptr, &str_len)) {
            printf("RESULT: %.*s\n", (int)str_len, str_ptr);
            arboricx_free_buf(ctx, str_ptr, str_len);
        }
    }
    free(bundle);
    arboricx_free(ctx);
    printf("done\n");
    return 0;
}


@@ -0,0 +1,60 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "../include/arboricx.h"

static uint8_t *read_file(const char *path, size_t *out_len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    if (len < 0) { fclose(f); return NULL; }
    *out_len = (size_t)len;
    fseek(f, 0, SEEK_SET);
    uint8_t *buf = malloc(*out_len);
    if (!buf || fread(buf, 1, *out_len, f) != *out_len) {
        free(buf);
        fclose(f);
        return NULL;
    }
    fclose(f);
    return buf;
}

static int test_bundle(arb_ctx_t *ctx, const char *path, int expect_val) {
    size_t bundle_len;
    uint8_t *bundle = read_file(path, &bundle_len);
    if (!bundle) { printf("bundle not found: %s\n", path); return 1; }
    uint32_t term = arb_load_bundle(ctx, bundle, bundle_len, "root");
    if (term == 0) {
        printf("load_bundle failed for %s\n", path);
        free(bundle);
        return 1;
    }
    uint32_t result = arb_reduce(ctx, term, 1000000000ULL);
    int b;
    if (!arb_to_bool(ctx, result, &b)) {
        printf("to_bool failed for %s\n", path);
        free(bundle);
        return 1;
    }
    printf("%s result bool=%d (expected %d)\n", path, b, expect_val);
    if (b != expect_val) {
        printf("MISMATCH!\n");
        free(bundle);
        return 1;
    }
    free(bundle);
    return 0;
}

int main(void) {
    arb_ctx_t *ctx = arboricx_init();
    if (!ctx) { printf("init failed\n"); return 1; }
    if (test_bundle(ctx, "../../test/fixtures/true.arboricx", 1) != 0) return 1;
    if (test_bundle(ctx, "../../test/fixtures/false.arboricx", 0) != 0) return 1;
    arboricx_free(ctx);
    printf("All bool tests passed.\n");
    return 0;
}


@@ -0,0 +1,60 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "../include/arboricx.h"

static uint8_t *read_file(const char *path, size_t *out_len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    if (len < 0) { fclose(f); return NULL; }
    *out_len = (size_t)len;
    fseek(f, 0, SEEK_SET);
    uint8_t *buf = malloc(*out_len);
    if (!buf || fread(buf, 1, *out_len, f) != *out_len) {
        free(buf);
        fclose(f);
        return NULL;
    }
    fclose(f);
    return buf;
}

int main(void) {
    arb_ctx_t *ctx = arboricx_init();
    if (!ctx) { printf("init failed\n"); return 1; }
    size_t bundle_len;
    uint8_t *bundle = read_file("../../test/fixtures/id.arboricx", &bundle_len);
    if (!bundle) { printf("bundle not found\n"); return 1; }
    printf("bundle size=%zu\n", bundle_len);
    clock_t t0 = clock();
    uint32_t term = arb_load_bundle(ctx, bundle, bundle_len, "root");
    clock_t t1 = clock();
    printf("load_bundle took %.3f ms, term=%u\n", (double)(t1 - t0) * 1000.0 / CLOCKS_PER_SEC, term);
    if (term == 0) {
        printf("load_bundle failed\n");
        free(bundle);
        return 1;
    }
    uint32_t arg1 = arb_of_string(ctx, "hello");
    uint32_t app0 = arb_app(ctx, term, arg1);
    printf("reducing...\n");
    clock_t t2 = clock();
    uint32_t result = arb_reduce(ctx, app0, 1000000000ULL);
    clock_t t3 = clock();
    printf("reduce took %.3f ms, result=%u\n", (double)(t3 - t2) * 1000.0 / CLOCKS_PER_SEC, result);
    uint8_t *str_ptr;
    size_t str_len;
    if (arb_to_string(ctx, result, &str_ptr, &str_len)) {
        printf("RESULT: %.*s\n", (int)str_len, str_ptr);
        arboricx_free_buf(ctx, str_ptr, str_len);
    } else {
        printf("to_string failed\n");
        free(bundle);
        return 1;
    }
    free(bundle);
    arboricx_free(ctx);
    printf("done\n");
    return 0;
}


@@ -0,0 +1,251 @@
#!/usr/bin/env python3
"""Python FFI tests for the Arboricx C ABI.
Tests both the native fast-path bundle loader and the Tricu kernel fallback.
"""
import ctypes
import os
import sys
import time
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
ZIG_DIR = os.path.dirname(SCRIPT_DIR)
lib_path = os.environ.get(
"ARBORICX_LIB",
os.path.join(ZIG_DIR, "zig-out", "lib", "libarboricx.so"),
)
lib = ctypes.CDLL(lib_path)
# --- Lifecycle ---
lib.arboricx_init.restype = ctypes.c_void_p
lib.arboricx_free.argtypes = [ctypes.c_void_p]
# --- Tree construction ---
lib.arb_leaf.argtypes = [ctypes.c_void_p]
lib.arb_leaf.restype = ctypes.c_uint32
lib.arb_stem.argtypes = [ctypes.c_void_p, ctypes.c_uint32]
lib.arb_stem.restype = ctypes.c_uint32
lib.arb_fork.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.c_uint32]
lib.arb_fork.restype = ctypes.c_uint32
lib.arb_app.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.c_uint32]
lib.arb_app.restype = ctypes.c_uint32
# --- Reduction ---
lib.arb_reduce.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.c_uint64]
lib.arb_reduce.restype = ctypes.c_uint32
# --- Codecs ---
lib.arb_of_number.argtypes = [ctypes.c_void_p, ctypes.c_uint64]
lib.arb_of_number.restype = ctypes.c_uint32
lib.arb_of_string.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
lib.arb_of_string.restype = ctypes.c_uint32
lib.arb_of_bytes.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_uint8), ctypes.c_size_t]
lib.arb_of_bytes.restype = ctypes.c_uint32
lib.arb_of_list.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_uint32), ctypes.c_size_t]
lib.arb_of_list.restype = ctypes.c_uint32
lib.arb_to_number.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.POINTER(ctypes.c_uint64)]
lib.arb_to_number.restype = ctypes.c_int
lib.arb_to_string.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.POINTER(ctypes.POINTER(ctypes.c_uint8)), ctypes.POINTER(ctypes.c_size_t)]
lib.arb_to_string.restype = ctypes.c_int
lib.arb_to_bool.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.POINTER(ctypes.c_int)]
lib.arb_to_bool.restype = ctypes.c_int
lib.arboricx_free_buf.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_uint8), ctypes.c_size_t]
# --- Result unwrapping ---
lib.arb_unwrap_result.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.POINTER(ctypes.c_int), ctypes.POINTER(ctypes.c_uint32), ctypes.POINTER(ctypes.c_uint32)]
lib.arb_unwrap_result.restype = ctypes.c_int
lib.arb_unwrap_host_value.argtypes = [ctypes.c_void_p, ctypes.c_uint32, ctypes.POINTER(ctypes.c_uint64), ctypes.POINTER(ctypes.c_uint32)]
lib.arb_unwrap_host_value.restype = ctypes.c_int
# --- Kernel ---
lib.arb_kernel_root.argtypes = [ctypes.c_void_p]
lib.arb_kernel_root.restype = ctypes.c_uint32
# --- Native bundle loading ---
lib.arb_load_bundle.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_uint8), ctypes.c_size_t, ctypes.c_char_p]
lib.arb_load_bundle.restype = ctypes.c_uint32
lib.arb_load_bundle_default.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_uint8), ctypes.c_size_t]
lib.arb_load_bundle_default.restype = ctypes.c_uint32
ctx = lib.arboricx_init()
print("ctx init ok")
fixtures = os.path.join(ZIG_DIR, "..", "..", "test", "fixtures")
def read_bundle(name):
    path = os.path.join(fixtures, name)
    with open(path, "rb") as f:
        return f.read()

def c_bytes(py_bytes):
    return (ctypes.c_uint8 * len(py_bytes))(*py_bytes)

def to_string(ctx, root):
    ptr = ctypes.POINTER(ctypes.c_uint8)()
    length = ctypes.c_size_t()
    if not lib.arb_to_string(ctx, root, ctypes.byref(ptr), ctypes.byref(length)):
        raise RuntimeError("to_string failed")
    result = ctypes.string_at(ptr, length.value)
    lib.arboricx_free_buf(ctx, ptr, length.value)
    return result.decode("utf-8")

def to_number(ctx, root):
    out = ctypes.c_uint64()
    if not lib.arb_to_number(ctx, root, ctypes.byref(out)):
        raise RuntimeError("to_number failed")
    return out.value

def to_bool(ctx, root):
    out = ctypes.c_int()
    if not lib.arb_to_bool(ctx, root, ctypes.byref(out)):
        raise RuntimeError("to_bool failed")
    return bool(out.value)

def kernel_run(bundle_bytes, args):
    """Run via the Tricu kernel interpreter (slow, ~3s for append)."""
    buf = c_bytes(bundle_bytes)
    bundle_tree = lib.arb_of_bytes(ctx, buf, len(bundle_bytes))
    tag = lib.arb_of_number(ctx, 1)
    arg_items = [lib.arb_of_string(ctx, a.encode("utf-8")) for a in args]
    current = lib.arb_leaf(ctx)
    for item in reversed(arg_items):
        current = lib.arb_fork(ctx, item, current)
    app0 = lib.arb_app(ctx, lib.arb_kernel_root(ctx), tag)
    app1 = lib.arb_app(ctx, app0, bundle_tree)
    app2 = lib.arb_app(ctx, app1, current)
    result = lib.arb_reduce(ctx, app2, 1_000_000_000)
    ok = ctypes.c_int()
    value = ctypes.c_uint32()
    rest = ctypes.c_uint32()
    if not lib.arb_unwrap_result(ctx, result, ctypes.byref(ok), ctypes.byref(value), ctypes.byref(rest)):
        raise RuntimeError("unwrap_result failed")
    tag_num = ctypes.c_uint64()
    payload = ctypes.c_uint32()
    if not lib.arb_unwrap_host_value(ctx, value.value, ctypes.byref(tag_num), ctypes.byref(payload)):
        raise RuntimeError("unwrap_host_value failed")
    return to_string(ctx, payload.value)

def native_run_default(bundle_bytes, args):
    """Run via the native bundle loader (fast, ~0.01s)."""
    buf = c_bytes(bundle_bytes)
    term = lib.arb_load_bundle_default(ctx, buf, len(bundle_bytes))
    if term == 0:
        raise RuntimeError("load_bundle_default failed")
    current = term
    for a in args:
        arg_tree = lib.arb_of_string(ctx, a.encode("utf-8"))
        current = lib.arb_app(ctx, current, arg_tree)
    result = lib.arb_reduce(ctx, current, 1_000_000_000)
    return to_string(ctx, result)

def native_run_named(bundle_bytes, name, args):
    """Run via the native bundle loader with a named export (fast)."""
    buf = c_bytes(bundle_bytes)
    term = lib.arb_load_bundle(ctx, buf, len(bundle_bytes), name.encode("utf-8"))
    if term == 0:
        raise RuntimeError(f"load_bundle({name!r}) failed")
    current = term
    for a in args:
        arg_tree = lib.arb_of_string(ctx, a.encode("utf-8"))
        current = lib.arb_app(ctx, current, arg_tree)
    result = lib.arb_reduce(ctx, current, 1_000_000_000)
    return to_string(ctx, result)
# ============================================================================
# Tests
# ============================================================================
all_ok = True
def check(label, got, want):
    global all_ok
    if got != want:
        print(f"FAIL {label}: got {got!r}, want {want!r}")
        all_ok = False
    else:
        print(f"PASS {label}: {got!r}")
# Test 1: id via kernel
print("\n--- Test 1: id (kernel path) ---")
bundle = read_bundle("id.arboricx")
t0 = time.time()
result = kernel_run(bundle, ["hello"])
t1 = time.time()
check("id kernel", result, "hello")
print(f" time: {(t1 - t0) * 1000:.1f} ms")
# Test 2: id via native
print("\n--- Test 2: id (native path) ---")
t0 = time.time()
result = native_run_default(bundle, ["hello"])
t1 = time.time()
check("id native", result, "hello")
print(f" time: {(t1 - t0) * 1000:.1f} ms")
# Test 3: append via kernel
print("\n--- Test 3: append (kernel path) ---")
bundle = read_bundle("append.arboricx")
t0 = time.time()
result = kernel_run(bundle, ["Hello, ", "world!"])
t1 = time.time()
check("append kernel", result, "Hello, world!")
print(f" time: {(t1 - t0) * 1000:.1f} ms")
# Test 4: append via native
print("\n--- Test 4: append (native path) ---")
t0 = time.time()
result = native_run_default(bundle, ["Hello, ", "world!"])
t1 = time.time()
check("append native", result, "Hello, world!")
print(f" time: {(t1 - t0) * 1000:.1f} ms")
# Test 5: append via native named export
print("\n--- Test 5: append via named export 'root' ---")
t0 = time.time()
result = native_run_named(bundle, "root", ["Hello, ", "world!"])
t1 = time.time()
check("append named", result, "Hello, world!")
print(f" time: {(t1 - t0) * 1000:.1f} ms")
# Test 6: true / false via native
print("\n--- Test 6: true / false (native path) ---")
for name, expected in [("true.arboricx", True), ("false.arboricx", False)]:
    bundle = read_bundle(name)
    buf = c_bytes(bundle)
    term = lib.arb_load_bundle_default(ctx, buf, len(bundle))
    result = lib.arb_reduce(ctx, term, 1_000_000_000)
    check(f"{name} bool", to_bool(ctx, result), expected)
# Test 7: number roundtrip
print("\n--- Test 7: number roundtrip ---")
num_tree = lib.arb_of_number(ctx, 42)
check("number 42", to_number(ctx, num_tree), 42)
# Test 8: string roundtrip
print("\n--- Test 8: string roundtrip ---")
str_tree = lib.arb_of_string(ctx, b"hello")
check("string hello", to_string(ctx, str_tree), "hello")
lib.arboricx_free(ctx)
if all_ok:
    print("\nAll tests passed!")
    sys.exit(0)
else:
    print("\nSome tests failed!")
    sys.exit(1)


@@ -0,0 +1,92 @@
const std = @import("std");

// Minimal Node definition for the DAG format (no App variant for kernels)
const Node = union(enum(u8)) {
    leaf,
    stem: struct { child: u32 },
    fork: struct { left: u32, right: u32 },
};

fn parseLine(line: []const u8) !Node {
    var it = std.mem.splitScalar(u8, std.mem.trim(u8, line, " \t\n\r"), ' ');
    const tag = it.next() orelse return error.EmptyLine;
    if (std.mem.eql(u8, tag, "leaf")) {
        return .leaf;
    } else if (std.mem.eql(u8, tag, "stem")) {
        const child_str = it.next() orelse return error.MissingChild;
        const child = try std.fmt.parseInt(u32, child_str, 10);
        return .{ .stem = .{ .child = child } };
    } else if (std.mem.eql(u8, tag, "fork")) {
        const left_str = it.next() orelse return error.MissingLeft;
        const right_str = it.next() orelse return error.MissingRight;
        const left = try std.fmt.parseInt(u32, left_str, 10);
        const right = try std.fmt.parseInt(u32, right_str, 10);
        return .{ .fork = .{ .left = left, .right = right } };
    } else {
        return error.UnknownTag;
    }
}

pub fn main(init: std.process.Init) !void {
    const gpa = init.gpa;
    const io = init.io;
    const args = try init.minimal.args.toSlice(init.arena.allocator());
    if (args.len != 3) {
        std.debug.print("Usage: gen_kernel <input.dag> <output.zig>\n", .{});
        std.process.exit(1);
    }
    const input_path = args[1];
    const output_path = args[2];
    const source = try std.Io.Dir.cwd().readFileAlloc(io, input_path, gpa, .limited(10 * 1024 * 1024));
    defer gpa.free(source);
    var nodes = std.ArrayList(Node).empty;
    defer nodes.deinit(gpa);
    var it = std.mem.splitScalar(u8, source, '\n');
    const root_line = it.next() orelse return error.EmptyFile;
    const root = try std.fmt.parseInt(u32, std.mem.trim(u8, root_line, " \t\n\r"), 10);
    while (it.next()) |line| {
        const trimmed = std.mem.trim(u8, line, " \t\n\r");
        if (trimmed.len == 0) continue;
        const node = try parseLine(trimmed);
        try nodes.append(gpa, node);
    }
    const file = try std.Io.Dir.cwd().createFile(io, output_path, .{});
    defer file.close(io);
    var buf: [4096]u8 = undefined;
    var writer = file.writer(io, &buf);
    try writer.interface.writeAll("// Auto-generated from ");
    try writer.interface.writeAll(input_path);
    try writer.interface.writeAll("\n// Do not edit manually.\n\n");
    try writer.interface.writeAll("pub const NodeTag = enum(u8) { leaf = 0, stem = 1, fork = 2 };\n\n");
    try writer.interface.writeAll("pub const Node = union(NodeTag) {\n");
    try writer.interface.writeAll("    leaf,\n");
    try writer.interface.writeAll("    stem: struct { child: u32 },\n");
    try writer.interface.writeAll("    fork: struct { left: u32, right: u32 },\n");
    try writer.interface.writeAll("};\n\n");
    try writer.interface.print("pub const kernel_root: u32 = {d};\n\n", .{root});
    try writer.interface.writeAll("pub const kernel_nodes = [_]Node{\n");
    for (nodes.items) |node| {
        switch (node) {
            .leaf => try writer.interface.writeAll("    .leaf,\n"),
            .stem => |s| try writer.interface.print("    .{{ .stem = .{{ .child = {d} }} }},\n", .{s.child}),
            .fork => |f| try writer.interface.print("    .{{ .fork = .{{ .left = {d}, .right = {d} }} }},\n", .{ f.left, f.right }),
        }
    }
    try writer.interface.writeAll("};\n");
    try writer.flush();
    std.debug.print("Generated {d} kernel nodes, root={d} -> {s}\n", .{ nodes.items.len, root, output_path });
}
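For reference, the `.dag` node-table format that this generator consumes can be sketched in a few lines of Python. The shape follows directly from the parser above (first line is the root index; each following line is `leaf`, `stem <child>`, or `fork <left> <right>`, with every child index pointing at an earlier node), but the `SAMPLE` table itself is illustrative, not taken from a real kernel export.

```python
# Minimal reader for the .dag node table consumed by gen_kernel.zig.
# Root 2 here encodes Fork (Stem Leaf) Leaf.
SAMPLE = """\
2
leaf
stem 0
fork 1 0
"""

def parse_dag(text):
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    root = int(lines[0])           # first line: root node index
    nodes = []
    for ln in lines[1:]:
        parts = ln.split()
        tag, refs = parts[0], [int(p) for p in parts[1:]]
        # Topological invariant: children always precede parents.
        assert all(r < len(nodes) for r in refs), "child index must precede parent"
        nodes.append((tag, refs))
    return root, nodes

root, nodes = parse_dag(SAMPLE)
print(root, nodes)  # -> 2 [('leaf', []), ('stem', [0]), ('fork', [1, 0])]
```

Because of the children-first invariant, a consumer like `gen_kernel.zig` can emit the table as a flat array in one pass with no fixups.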

flake.lock (generated)

@@ -20,11 +20,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1734566935,
-        "narHash": "sha256-cnBItmSwoH132tH3D4jxmMLVmk8G5VJ6q/SC3kszv9E=",
+        "lastModified": 1778505177,
+        "narHash": "sha256-ao5+JS50HqNt/dtm4zuiQI+IXOn6hw50W6RTwUKYTww=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "087408a407440892c1b00d80360fd64639b8091d",
+        "rev": "fb2ce70b4ae882574081225eb3c2872f39418df3",
         "type": "github"
       },
       "original": {


@@ -29,9 +29,82 @@
customGHC = haskellPackages.ghcWithPackages (hpkgs: with hpkgs; [
  megaparsec
]);

# ------------------------------------------------------------------
# Zig Arboricx host
# ------------------------------------------------------------------
tricuZig = pkgs.stdenv.mkDerivation {
  pname = "tricu-zig";
  version = "0.1.0";
  src = ./ext/zig;
  nativeBuildInputs = [ pkgs.zig ];
  buildPhase = ''
    export ZIG_GLOBAL_CACHE_DIR=$TMPDIR/zig-cache
    zig build
  '';
  installPhase = ''
    mkdir -p $out/bin $out/lib $out/include
    cp zig-out/bin/* $out/bin/ 2>/dev/null || true
    cp zig-out/lib/* $out/lib/ 2>/dev/null || true
    cp include/arboricx.h $out/include/
  '';
};

# Separate test target, not included in `nix flake check`
tricuZigTests = pkgs.stdenv.mkDerivation {
  pname = "tricu-zig-tests";
  version = "0.1.0";
  src = ./.;
  nativeBuildInputs = [ pkgs.gcc pkgs.python3 tricuZig ];
  buildPhase = "true";
  doCheck = true;
  checkPhase = ''
    export LD_LIBRARY_PATH=${tricuZig}/lib:$LD_LIBRARY_PATH
    ulimit -s 32768
    cd ext/zig

    # C ABI smoke test
    gcc -o /tmp/c_abi_test tests/c_abi_test.c \
      -I ${tricuZig}/include -L ${tricuZig}/lib -larboricx \
      -Wl,-rpath,${tricuZig}/lib
    /tmp/c_abi_test

    # Kernel path append test
    gcc -o /tmp/c_abi_append_test tests/c_abi_append_test.c \
      -I ${tricuZig}/include -L ${tricuZig}/lib -larboricx \
      -Wl,-rpath,${tricuZig}/lib
    /tmp/c_abi_append_test

    # Native bundle tests
    gcc -o /tmp/native_bundle_append_test tests/native_bundle_append_test.c \
      -I ${tricuZig}/include -L ${tricuZig}/lib -larboricx \
      -Wl,-rpath,${tricuZig}/lib
    /tmp/native_bundle_append_test
    gcc -o /tmp/native_bundle_id_test tests/native_bundle_id_test.c \
      -I ${tricuZig}/include -L ${tricuZig}/lib -larboricx \
      -Wl,-rpath,${tricuZig}/lib
    /tmp/native_bundle_id_test
    gcc -o /tmp/native_bundle_bools_test tests/native_bundle_bools_test.c \
      -I ${tricuZig}/include -L ${tricuZig}/lib -larboricx \
      -Wl,-rpath,${tricuZig}/lib
    /tmp/native_bundle_bools_test

    # Python FFI test
    ARBORICX_LIB=${tricuZig}/lib/libarboricx.so \
      python3 tests/python_ffi_test.py

    mkdir -p $out
    echo "All Zig tests passed" > $out/result
  '';
};
in {
  packages.${packageName} = tricuPackage;
  packages.default = tricuPackage;
  packages.tricu-zig = tricuZig;
  packages.tricu-zig-tests = tricuZigTests;

  checks.${packageName} = tricuPackageTests;
  checks.default = tricuPackageTests;
@@ -43,10 +116,14 @@
  haskellPackages.ghcid
  customGHC
  upx
  zig
  gcc
  python3
];
inputsFrom = [
  tricuPackage
  tricuZig
];
};

lib/arboricx-dispatch.tri (new file)

@@ -0,0 +1,23 @@
!import "arboricx.tri" !Local
!import "patterns.tri" !Local
-- Multi-purpose kernel dispatch.
--
-- runArboricxTyped tag bundleBytes args
-- tag 0 → hostTree (runArboricxToTree)
-- tag 1 → hostString (runArboricxToString)
-- tag 2 → hostNumber (runArboricxToNumber)
-- tag 3 → hostBool (runArboricxToBool)
-- tag 4 → hostList (runArboricxToList)
-- tag 5 → hostBytes (runArboricxToBytes)
-- otherwise → err 99 bundleBytes
runArboricxTyped = (tag bs args :
  match tag
    [[(equal? hostTreeTag)   (_ : runArboricxToTree   bs args)]
     [(equal? hostStringTag) (_ : runArboricxToString bs args)]
     [(equal? hostNumberTag) (_ : runArboricxToNumber bs args)]
     [(equal? hostBoolTag)   (_ : runArboricxToBool   bs args)]
     [(equal? hostListTag)   (_ : runArboricxToList   bs args)]
     [(equal? hostBytesTag)  (_ : runArboricxToBytes  bs args)]
     [otherwise              (_ : err 99 bs)]])


@@ -1,6 +1,7 @@
 module Main where

-import ContentStore (initContentStore, loadEnvironment, resolveExportTarget)
+import ContentStore (initContentStore, loadEnvironment, loadTerm, resolveExportTarget)
+import System.Exit (die)
 import Server (runServer)
 import Eval (evalTricu, mainResult, result)
 import FileEval
@@ -32,6 +33,7 @@ data TricuArgs
  | Export { hash :: String, exportNameOpt :: String, outFile :: FilePath, names :: [String] }
  | Import { inFile :: FilePath }
  | Serve { host :: String, port :: Int }
  | ExportDag { target :: String, outFile :: FilePath }
  deriving (Show, Data, Typeable)

replMode :: TricuArgs
@@ -112,10 +114,21 @@ serveMode = Serve
  &= explicit
  &= name "server"

exportDagMode :: TricuArgs
exportDagMode = ExportDag
  { target = def &= help "Stored term name or hash to export as a DAG node table."
                 &= name "t" &= typ "NAME_OR_HASH"
  , outFile = def &= help "Optional output file path. Defaults to stdout."
                  &= name "o" &= typ "FILE"
  }
  &= help "Export a term's Merkle DAG as a topologically-sorted node table for host embedding."
  &= explicit
  &= name "export-dag"
 main :: IO ()
 main = do
   let versionStr = "tricu Evaluator and REPL " ++ showVersion version
-  cmdArgsParsed <- cmdArgs $ modes [replMode, evaluateMode, decodeMode, compileMode, exportMode, importMode, serveMode]
+  cmdArgsParsed <- cmdArgs $ modes [replMode, evaluateMode, decodeMode, compileMode, exportMode, importMode, serveMode, exportDagMode]
     &= help "tricu: Exploring Tree Calculus"
     &= program "tricu"
     &= summary versionStr
@@ -191,6 +204,18 @@ main = do
      putStrLn $ "  GET /bundle/name/:name -- convenience endpoint"
      putStrLn $ "  Content-Type: application/vnd.arboricx.bundle"
      runServer hostStr portNum
    ExportDag { target = targetName, outFile = dagOutFile } -> do
      conn <- initContentStore
      maybeTerm <- loadTerm conn targetName
      close conn
      case maybeTerm of
        Nothing -> die $ "Term not found: " ++ targetName
        Just term -> do
          let (rootIdx, nodes) = exportDag term
              output = unlines $ show rootIdx : map (\(tag, refs) -> unwords (tag : map show refs)) nodes
          if null dagOutFile
            then putStr output
            else writeFile dagOutFile output
runTricu :: String -> String
runTricu = formatT TreeCalculus . runTricuT


@@ -12,6 +12,7 @@ import System.Console.CmdArgs (Data, Typeable)
import qualified Data.ByteString as BS
import qualified Data.Map as Map
import qualified Data.Set as Set
import qualified Data.Text as T

-- Tree Calculus Types
@@ -296,3 +297,41 @@ decodeResult tc =
    || n == 9
    || n == 10
    || n == 13
-- ---------------------------------------------------------------------------
-- DAG node-table export (for host-language kernel embedding)
-- ---------------------------------------------------------------------------

-- | Export a term's Merkle DAG as a topologically-sorted node table.
-- Children appear before parents, so every child index refers to an
-- already-emitted node.
-- Returns (root index, list of (tag, [child_indices])).
exportDag :: T -> (Int, [(String, [Int])])
exportDag term =
  let (root, acc, _) = collectDag term [] Set.empty
      -- acc is in reverse post-order (root first); reversing it yields
      -- children before parents
      ordered = reverse acc
      idxMap  = Map.fromList [(h, i) | (i, (h, _)) <- zip [0..] ordered]
      rootIdx = idxMap Map.! root
      lines_  = map (formatNode idxMap . snd) ordered
  in (rootIdx, lines_)
  where
    collectDag :: T -> [(MerkleHash, Node)] -> Set.Set MerkleHash -> (MerkleHash, [(MerkleHash, Node)], Set.Set MerkleHash)
    collectDag Leaf acc seen =
      let h = nodeHash NLeaf
      in if Set.member h seen then (h, acc, seen) else (h, (h, NLeaf) : acc, Set.insert h seen)
    collectDag (Stem t) acc seen =
      let (ch, acc', seen') = collectDag t acc seen
          node = NStem ch
          h = nodeHash node
      in if Set.member h seen' then (h, acc', seen') else (h, (h, node) : acc', Set.insert h seen')
    collectDag (Fork l r) acc seen =
      let (lh, acc', seen')   = collectDag l acc seen
          (rh, acc'', seen'') = collectDag r acc' seen'
          node = NFork lh rh
          h = nodeHash node
      in if Set.member h seen'' then (h, acc'', seen'') else (h, (h, node) : acc'', Set.insert h seen'')

    formatNode :: Map.Map MerkleHash Int -> Node -> (String, [Int])
    formatNode _ NLeaf = ("leaf", [])
    formatNode idxMap (NStem ch) = ("stem", [idxMap Map.! ch])
    formatNode idxMap (NFork l r) = ("fork", [idxMap Map.! l, idxMap Map.! r])
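The traversal above can be mirrored in a short Python sketch to make the deduplicate-and-order invariant concrete: shared subtrees are emitted once, and each emitted node only references earlier indices. This is illustrative only, not part of the codebase; nested tuples stand in for `T`, and structural keys stand in for Merkle hashes.

```python
# Sketch of exportDag: post-order walk with structural deduplication.
# Terms are "leaf", ("stem", t), or ("fork", l, r).

def export_dag(term):
    index = {}   # structural key -> node index (plays the role of the hash map)
    nodes = []   # (tag, [child indices]) in emission order

    def walk(t):
        if t == "leaf":
            key, children = ("leaf",), []
        elif t[0] == "stem":
            c = walk(t[1])
            key, children = ("stem", c), [c]
        else:  # ("fork", l, r)
            l, r = walk(t[1]), walk(t[2])
            key, children = ("fork", l, r), [l, r]
        if key not in index:          # emit each distinct subtree once
            index[key] = len(nodes)
            nodes.append((key[0], children))
        return index[key]

    return walk(term), nodes

# Fork (Stem Leaf) (Stem Leaf): the shared Stem(Leaf) appears once.
root, nodes = export_dag(("fork", ("stem", "leaf"), ("stem", "leaf")))
print(root, nodes)  # -> 2 [('leaf', []), ('stem', [0]), ('fork', [1, 1])]
```

The emitted table is exactly what the `export-dag` CLI mode prints (root index, then one node per line) and what `gen_kernel.zig` expects on the host side.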