Dashboard

QubGPU Neural Datagram Protocol Platform

GPU: —    VRAM: —    CUDA: —    Datasets: —    Training: —

Knowledge Packet Format

PRE (0xAA) | SRC 8bit | DST 8bit | TYP 8bit | LEN 16bit | PAYLOAD var | CRC-16 16bit

6-byte header + variable-length payload + 2-byte CRC trailer. Fixed overhead: 8 bytes per packet. Payload efficiency: 94-99.6%, depending on payload size.
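A minimal encoder sketch consistent with the layout above, assuming big-endian fields and the CRC-16/CCITT-FALSE variant; the platform's actual byte order and CRC polynomial are not specified here, so treat both as illustrative choices:

```python
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    # CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF (assumed variant).
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def encode_packet(src: int, dst: int, typ: int, payload: bytes) -> bytes:
    # 6-byte header: PRE (0xAA) | SRC | DST | TYP | LEN (16-bit length).
    header = struct.pack(">BBBBH", 0xAA, src, dst, typ, len(payload))
    body = header + payload
    # 2-byte CRC trailer covering header + payload -> 8 bytes of fixed overhead.
    return body + struct.pack(">H", crc16_ccitt(body))

pkt = encode_packet(src=1, dst=2, typ=0x10, payload=b"hello")
efficiency = len(b"hello") / len(pkt)  # payload share of the packet
```

With a 5-byte payload the packet is 13 bytes; as payloads grow, the fixed 8-byte overhead shrinks toward the quoted 99.6% efficiency ceiling.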

Quick Links

Datasets

Upload, convert, and manage training datasets

📤

Drop a JSONL file here, or click to browse

Supports .jsonl files with {instruction, output} pairs

Your Datasets

Loading...

Training

Train models with the QubGPU protocol

Training Configuration

Chat

Test your trained model

No model loaded
Q: Hello! I'm the QubGPU assistant. Load a model and start chatting. You can use a base model or a fine-tuned LoRA adapter.

Protocol Lab

Encode text into Knowledge Packets and inspect the binary format

Packet Encoder

Protocol Benchmark

Benchmark Results

NDP protocol performance across text, image, and audio data

Loading benchmark results...
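The 94-99.6% efficiency range quoted elsewhere on the dashboard follows directly from the 8-byte fixed overhead. A quick check, using illustrative payload sizes rather than the benchmark's actual text/image/audio corpus:

```python
def ndp_efficiency(payload_bytes: int, overhead: int = 8) -> float:
    # Share of each packet that is payload rather than header/CRC trailer.
    return payload_bytes / (payload_bytes + overhead)

# Small payloads sit near the low end of the quoted range;
# large image/audio payloads approach the high end.
for size in (128, 512, 2000):
    print(f"{size:5d} B -> {ndp_efficiency(size):.1%}")
```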

Neural Router

Direct Knowledge Injection: Like a Router Routes Packets, Not Like a Student Studies Books

Packets Injected: 0 · Knowledge Entries: 0 · Total Inject Time: 0ms · Memory Bank: 0MB

πŸ“ Inject Text Knowledge

Instantly encode text into associative memory β€” no training needed
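The "no training needed" claim corresponds to the write path being a plain append: encode the text to a key vector and store it next to its value. A toy CPU sketch; the hash-based `embed` function, the 64-dim size, and the `MemoryBank` class are stand-ins for whatever encoder and store the platform actually uses:

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Stand-in embedding: hash bytes folded into a fixed-size unit vector.
    # (A real deployment would use a learned encoder, e.g. 768-dim.)
    digest = hashlib.sha256(text.encode()).digest()
    vec = [digest[i % len(digest)] - 128 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

class MemoryBank:
    # Associative store: injection is a single append, no gradient step.
    def __init__(self):
        self.keys: list[list[float]] = []
        self.values: list[str] = []

    def inject(self, text: str) -> None:
        self.keys.append(embed(text))
        self.values.append(text)

bank = MemoryBank()
bank.inject("QubGPU packets carry an 8-byte overhead.")
```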

🎯 Inject Fact (ROME-style)

Surgical rank-1 weight update targeting specific neurons
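A ROME-style rank-1 edit picks a key vector k (the activation pattern selecting the fact's neurons) and a target value v, then adds a rank-1 correction so the edited matrix maps k exactly to v. The closed form below, W' = W + ((v - Wk)/(k·k))·kᵀ, is a simplification that omits ROME's covariance weighting; the 2x2 toy matrix is purely illustrative:

```python
def rank1_edit(W, k, v):
    # W' = W + ((v - W k) / (k . k)) outer k  -- guarantees W' k == v.
    Wk = [sum(W[i][j] * k[j] for j in range(len(k))) for i in range(len(W))]
    kk = sum(x * x for x in k)
    delta = [(v[i] - Wk[i]) / kk for i in range(len(v))]
    return [[W[i][j] + delta[i] * k[j] for j in range(len(k))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # toy 2x2 weight matrix
k = [1.0, 2.0]                 # key selecting the fact's neurons
v = [3.0, 4.0]                 # desired output for that key
W2 = rank1_edit(W, k, v)
```

Because the update is rank-1, only one outer product is added to the weights; every key orthogonal to k is mapped exactly as before, which is why the edit is "surgical".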

πŸ” Query Associative Memory

CAM-style parallel lookup β€” retrieves stored knowledge via cosine similarity
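CAM-style lookup means scoring the query against every stored key at once and returning the best match by cosine similarity. A minimal sketch with hand-built 3-dim vectors; on the GPU the scoring loop would be a single matrix-vector multiply:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def cam_lookup(query, keys, values):
    # Score every entry (here a loop; on GPU, one matmul), take the argmax.
    scores = [cosine(query, k) for k in keys]
    best = max(range(len(scores)), key=scores.__getitem__)
    return values[best], scores[best]

keys = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
values = ["packet overhead is 8 bytes", "CRC-16 trailer"]
match, score = cam_lookup([0.9, 0.1, 0.0], keys, values)
```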

📊 Neural Router vs Traditional Training

πŸ—οΈ Neural Router Architecture

╔══════════════════════════════════════════════════════════╗
║                      NEURAL ROUTER                       ║
║                                                          ║
║ ┌──────────┐   ┌───────────┐   ┌────────────┐   ┌──────┐ ║
║ │ INGRESS  │──▶│  ROUTING  │──▶│ FORWARDING │──▶│EGRESS│ ║
║ │ PIPELINE │   │   TABLE   │   │   ENGINE   │   │ PIPE │ ║
║ │          │   │ (CAM/TCAM)│   │            │   │      │ ║
║ │ • CRC-32 │   │           │   │ • OVERWRITE│   │ • Vfy│ ║
║ │ • Defrag │   │ • Layer   │   │ • BLEND    │   │ • Fls│ ║
║ │ • Decode │   │ • Module  │   │ • RANK1    │   │ • Cch│ ║
║ │ • Align  │   │ • Neuron  │   │ • IMPRINT  │   │ • Cfm│ ║
║ └──────────┘   └───────────┘   │ • HOPFIELD │   └──────┘ ║
║                                └────────────┘            ║
║                                                          ║
║ ┌──────────────────────────────────────────────────────┐ ║
║ │       ASSOCIATIVE MEMORY BANK (GPU-Resident)         │ ║
║ │   [Key₁|Val₁]  [Key₂|Val₂]  ...  [Keyₙ|Valₙ]         │ ║
║ │   Capacity: 8M entries @ 768-dim on RTX 3090 (24GB)  │ ║
║ │   Retrieval: <1ms via Tensor Core matrix multiply    │ ║
║ └──────────────────────────────────────────────────────┘ ║
╚══════════════════════════════════════════════════════════╝

Egress pipeline stages: Verify, Flush, Cache, Confirm.