Core Research Projects

Our research is structured around five core development tracks, each designed to overcome key constraints in present-day AI architectures.

Errloom: A Next-Generation Reinforcement Learning Framework

Revolutionary post-training methodologies introducing novel concepts:

  • Musical Dynamics Integration: Virtual music is embedded into the inference process, using temperature modulation and spiking to investigate cymatic-like dynamics in reasoning (a temperature-schedule sketch follows this list).

  • Holoware Programming Language: A novel prompting domain-specific language (.syn) designed as a programming interface for reinforcement learning engineering.

  • Reimagined RL Abstractions: Rewards are reframed as gravity, rubrics as attractors, environments as woven looms, and rollout sets as dynamic tapestries (an attractor-reward sketch follows this list).

  • Procedural Learning Architecture: Learning procedures are encoded through gravitational abstractions that connect directly to the model’s core kernel.
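
The temperature-modulation idea can be made concrete with a small sketch. The following Python is a minimal, hypothetical illustration, not the Errloom API: `musical_temperature`, its parameters, and the sinusoid-plus-spike schedule are all assumptions chosen to show how a waveform-like temperature could drive sampling.

```python
import math
import random

def musical_temperature(step: int, base: float = 0.7,
                        amplitude: float = 0.3, period: int = 16,
                        spike_prob: float = 0.05, spike_temp: float = 1.5) -> float:
    """Oscillate the sampling temperature like a waveform, with rare spikes."""
    # Smooth sinusoidal modulation around the base temperature.
    temp = base + amplitude * math.sin(2 * math.pi * step / period)
    # Occasional spikes push decoding into a brief high-entropy regime.
    if random.random() < spike_prob:
        temp = spike_temp
    return max(temp, 1e-3)

# Example: a schedule over 64 decoding steps.
schedule = [musical_temperature(t) for t in range(64)]
```

In practice such a schedule would feed the sampler's temperature at each decoding step; the spikes briefly raise entropy, which is one plausible reading of "spiking" here.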
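Similarly, the gravity/attractor framing admits a simple toy reading: score a rollout by the pull it feels toward rubric embeddings. The sketch below is a hypothetical illustration, not Errloom's reward model; `attractor_reward`, the inverse-square form, and the softening constant are assumptions.

```python
import numpy as np

def attractor_reward(rollout_embedding: np.ndarray,
                     attractors: list[np.ndarray],
                     masses: list[float]) -> float:
    """Treat each rubric as a massive body; reward is the total
    gravitational pull the rollout feels toward the attractors."""
    reward = 0.0
    for center, mass in zip(attractors, masses):
        dist = np.linalg.norm(rollout_embedding - center)
        # Inverse-square pull, softened to avoid a singularity at dist = 0.
        reward += mass / (dist ** 2 + 1e-6)
    return reward

# Example: two rubric attractors in a 3-d embedding space.
a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(attractor_reward(np.array([0.9, 0.1, 0.0]), [a1, a2], [1.0, 0.5]))
```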

Ghamten: A Fusion of Discrete Auto-Encoding and High-Level Reasoning

Current priority project and the key effort advancing the semiodynamical hypothesis:

  • Transformer-Native Language Development: Models generate non-human languages by leveraging reward attractors that drive meaning compression.

  • Two-Stage Process: Computational workloads are streamlined through a sequential process of semantic compression followed by semiodynamical extrusion (a toy pipeline sketch follows this list).

  • Hypercompressed Reasoning: Models process information as compressed units of meaning, similar to ‘zip files’ of semantic content, allowing for highly efficient reasoning.

  • Physics of Meaning: Meaning flows dynamically, bypassing static structures rather than adhering to fixed trajectories.

  • Technical Breakthrough: English provides only the initial foundation—Ghamten facilitates the creation of languages native to transformer architectures, integrating elements from all human languages.
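
One way to picture the two-stage process is a toy pipeline that first quantizes a long embedding sequence onto a small discrete codebook (standing in for the discrete auto-encoder) and then hands the short code sequence to downstream reasoning. This numpy sketch is an assumption-laden illustration, not Ghamten itself; the codebook size, pooling window, and `compress` helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: semantic compression — map a long embedding sequence onto a
# small discrete codebook (a toy stand-in for a discrete auto-encoder).
codebook = rng.normal(size=(32, 64))          # 32 codes, 64 dimensions

def compress(embeddings: np.ndarray, window: int = 4) -> np.ndarray:
    """Pool every `window` embeddings and snap each pool to its nearest code."""
    pooled = embeddings[: len(embeddings) // window * window]
    pooled = pooled.reshape(-1, window, embeddings.shape[1]).mean(axis=1)
    dists = np.linalg.norm(pooled[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)               # sequence of code indices

# Stage 2: semiodynamical extrusion — downstream reasoning operates on the
# short code sequence instead of the raw tokens.
tokens = rng.normal(size=(128, 64))           # e.g. 128 token embeddings
codes = compress(tokens)                      # -> 32 discrete "units of meaning"
print(codes.shape)                            # (32,)
```

Here 128 token embeddings collapse to 32 discrete codes: the "zip file" of semantic content that a later reasoning stage would consume.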