A Brainf*ck to native Linux x86-64 ELF64 compiler, written in NC Lang. https://github.com/Codycody31/nc-bfcc

bfcc

A Brainf*ck to native Linux x86-64 ELF64 compiler, written in NC Lang.

Compiles .bf/.b/.fck source files directly into statically-linked ELF binaries using Linux syscalls.

Usage

bfcc <file> [-o output]

The default output is a.out.

How it works

  1. Tokenizes Brainf*ck source into instructions
  2. Emits x86-64 machine code (using r12 as the tape pointer)
  3. Wraps it in a raw ELF64 binary with a 30,000-byte BSS tape
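The per-instruction encodings in step 2 can be sketched roughly as follows. This is a hedged Python illustration (the actual compiler is written in NC Lang, and its exact encodings and loop strategy may differ): each instruction maps to a fixed x86-64 byte sequence with r12 as the tape pointer, `.` expands to a raw write syscall, and loops are backpatched with rel32 jump offsets.

```python
import struct

# Fixed x86-64 encodings, assuming r12 holds the tape pointer.
# Addressing [r12] needs a SIB byte, since rm=100 selects SIB mode.
EMIT = {
    ">": bytes([0x49, 0xFF, 0xC4]),        # inc r12
    "<": bytes([0x49, 0xFF, 0xCC]),        # dec r12
    "+": bytes([0x41, 0xFE, 0x04, 0x24]),  # inc byte [r12]
    "-": bytes([0x41, 0xFE, 0x0C, 0x24]),  # dec byte [r12]
    ".": bytes([0x48, 0xC7, 0xC0, 1, 0, 0, 0,   # mov rax, 1  (sys_write)
                0x48, 0xC7, 0xC7, 1, 0, 0, 0,   # mov rdi, 1  (stdout)
                0x4C, 0x89, 0xE6,               # mov rsi, r12
                0x48, 0xC7, 0xC2, 1, 0, 0, 0,   # mov rdx, 1  (one byte)
                0x0F, 0x05]),                   # syscall
}
CMP_CELL = bytes([0x41, 0x80, 0x3C, 0x24, 0x00])  # cmp byte [r12], 0

def compile_bf(src: str) -> bytes:
    code = bytearray()
    stack = []  # code offsets just past each pending '[' je placeholder
    for ch in src:
        if ch in EMIT:
            code += EMIT[ch]
        elif ch == "[":
            code += CMP_CELL
            code += bytes([0x0F, 0x84]) + b"\x00\x00\x00\x00"  # je rel32, patched at ']'
            stack.append(len(code))
        elif ch == "]":
            body = stack.pop()
            code += CMP_CELL
            # jne back to the loop body (rel32 is relative to the end of the jne)
            code += bytes([0x0F, 0x85]) + struct.pack("<i", body - (len(code) + 6))
            # backpatch the matching je to jump past the loop
            code[body - 4:body] = struct.pack("<i", len(code) - body)
    return bytes(code)
```

The stack makes nested loops fall out naturally: each `]` patches the most recent unmatched `[`.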

Examples

ncc build ./bfcc.nc
chmod +x ./bfcc
./bfcc examples/mandelbrot.b -o mandelbrot
./mandelbrot
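Step 3 of How it works, wrapping the machine code in a raw ELF64 binary, amounts to packing an Elf64_Ehdr and a single program header by hand. A hedged Python sketch follows; the field layout is per the ELF64 specification, but the load address 0x400000, the single RWX PT_LOAD segment, and placing the tape as extra p_memsz are illustrative assumptions, not necessarily what bfcc emits.

```python
import struct

TAPE_SIZE = 30_000        # the 30,000-byte BSS tape
LOAD_ADDR = 0x400000      # assumed virtual load address
EHDR_SIZE, PHDR_SIZE = 64, 56

def wrap_elf64(code: bytes) -> bytes:
    entry = LOAD_ADDR + EHDR_SIZE + PHDR_SIZE  # code starts right after the headers
    # Elf64_Ehdr: ET_EXEC (2) for EM_X86_64 (0x3e), one program header, no sections.
    ehdr = struct.pack(
        "<16sHHIQQQIHHHHHH",
        b"\x7fELF" + bytes([2, 1, 1]) + b"\x00" * 9,  # 64-bit, little-endian, SysV
        2, 0x3E, 1,                        # e_type, e_machine, e_version
        entry,                             # e_entry
        EHDR_SIZE, 0, 0,                   # e_phoff, e_shoff, e_flags
        EHDR_SIZE, PHDR_SIZE, 1, 0, 0, 0,  # header sizes and counts
    )
    filesz = EHDR_SIZE + PHDR_SIZE + len(code)
    # One PT_LOAD, RWX; p_memsz > p_filesz makes the kernel zero-fill the
    # extra bytes, which is how a BSS tape costs nothing in the file.
    phdr = struct.pack(
        "<IIQQQQQQ",
        1, 7,                        # p_type=PT_LOAD, p_flags=RWX
        0, LOAD_ADDR, LOAD_ADDR,     # p_offset, p_vaddr, p_paddr
        filesz, filesz + TAPE_SIZE,  # p_filesz, p_memsz
        0x1000,                      # p_align
    )
    return ehdr + phdr + code
```

Omitting section headers entirely (e_shoff = 0, e_shnum = 0) is legal for an executable and is what keeps the binaries this small.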

Benchmarks

nc-bfcc results on AMD Ryzen 7 5700, Linux x86-64. Best of 5 runs.

| Program      | Compile Time | Run Time | Binary Size |
|--------------|--------------|----------|-------------|
| mandelbrot.b | 0.04s        | 0.55s    | 19 KB       |
| bottles.b    | <0.01s       | <0.001s  | 8 KB        |
| hanoi.b      | 0.57s        | 0.08s    | 73 KB       |

Compiler build time: 0.10s (ncc build ./bfcc.nc). Compiler binary: 134 KB.

Comparison with Other Brainfuck Implementations

Mandelbrot runtimes collected from published benchmarks. Hardware varies across projects (noted where known), so treat cross-project comparisons as approximate. nc-bfcc numbers are from the table above.

AOT Compilers & JITs

| Implementation | Language | Approach | Mandelbrot | Bottles | Notes |
|---|---|---|---|---|---|
| nc-bfcc | NC Lang | AOT → x86-64 ELF | 0.55s | <0.001s | This project. Direct machine code, no libc. |
| Tritium | C | Interpreter + JIT (DynASM) | ~0.30s | | Fastest known BF JIT. Uses LuaJIT's DynASM. |
| BF-JIT | Rust | JIT (x86-64/AArch64) | ~0.39s (M3) / ~1.06s (i5) | | Hybrid AOT/JIT. 53x speedup over naive. |
| bfc | Rust | AOT via LLVM | | 0.007s | Industrial-grade. Full LLVM optimization pipeline. |
| Cranefack | Rust | Interpreter + JIT (Cranelift) | | | Multiple optimization levels. Built-in benchmarking. |
| Nayuki BF→C | Python | Transpiler → C | ~0.64s (gcc -O1) | | BF→C, then relies on gcc/clang for codegen. |
| Geschwindigkeitsficken | C | Transpiler → C / x86-64 asm | | | 9 optimization strategies. Outputs C or asm. |
| esotope-bfc | Python | Transpiler → C | | | Gold standard BF→C optimizer (2009). Full constant propagation. |
| awib | Brainfuck | Transpiler → C/Go/Java/Rust/i386 ELF | | | Written in BF itself. Targets 7 languages. |

Optimized Interpreters

| Implementation | Language | Mandelbrot | Notes |
|---|---|---|---|
| bf-cpp | C++ | ~0.77s | Claims fastest non-JIT interpreter. |
| BrainForked | C | ~0.98s | Optimized interpreter. |
| Tritium (interp only) | C | ~1.16s | Same project, interpreter-only mode (-r). |
| sbfi | C | ~1.14s | Simple optimized interpreter. |
| bffsree | C | ~1.24s (M3) | Fastest pure interpreter (no JIT/ASM). "Double-instructioning" technique. |
| bff4 | C | | Long-standing fast interpreter. dbfi: 248ms. |

Transpiler Targets (Van Heusden compilersuite)

Mandelbrot runtimes via vanheusden.com BF→X transpiler:

| Target | Mandelbrot |
|---|---|
| Rust | ~1.30s |
| x86 asm | ~1.48s |
| C (GCC) | ~1.61s |
| Go | ~3.58s |
| Node.js | ~6.24s |
| Java | ~10.66s |
| Python (PyPy) | ~8.12s |
| Ruby | ~1m42s |
| Bash | ~5h16m |

Key Takeaways

  • nc-bfcc places in Tier 1 among BF compilers, with mandelbrot at 0.55s — competitive with JIT compilers and faster than most interpreters and transpiler pipelines.
  • Only Tritium's DynASM JIT (~0.30s) and BF-JIT on Apple M3 (~0.39s) are clearly faster, both using runtime code generation.
  • nc-bfcc produces tiny, self-contained binaries (8–73 KB in the table above) with zero dependencies: no libc, no dynamic linking.
  • Build speed is near-instant — the compiler itself builds in 0.10s and compiles mandelbrot in 0.04s. Most LLVM/Cranelift-based alternatives have multi-second or multi-minute build times for the compiler itself.
  • bffsree deserves special mention as the fastest pure interpreter (no JIT, no native codegen), achieving ~1.24s on mandelbrot through aggressive bytecode-level optimization.