---
language: en
license: mit
tags:
- image-classification
- imagenet
- multi-scale
- feature-geometry
- david
datasets:
- imagenet-1k
metrics:
- accuracy
model-index:
- name: David-decoupled-cantor_scale
  results:
  - task:
      type: image-classification
    dataset:
      name: ImageNet-1K
      type: imagenet-1k
    metrics:
    - type: accuracy
      value: 78.90
---

# David: Multi-Scale Feature Classifier

**David** is a multi-scale deep learning classifier that uses feature geometry (pentachora/4-simplexes) as class prototypes with role-weighted similarity computation (Rose Loss).

## Model Details

### Architecture
- **Preset**: clip_vit_b16_cantor_big_window
- **Sharing Mode**: decoupled
- **Fusion Mode**: cantor_scale
- **Scales**: [256, 512, 768, 1024, 2048, 4096]
- **Feature Dim**: 512
- **Parameters**: 60,452,103

### Training Configuration
- **Dataset**: AbstractPhil/imagenet-clip-features-orderly (loading sketch after this list)
- **Model Variant**: clip_vit_b16
- **Epochs**: 5
- **Batch Size**: 512
- **Learning Rate**: 0.001
- **Rose Loss Weight**: 0.1 → 0.5
- **Cayley Loss**: False
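
The dataset above holds precomputed CLIP features rather than raw images. As a quick sanity check, the sketch below peeks at it with the `datasets` library; the `split="train"` argument and the assumption that the repo loads directly through `load_dataset` are mine, not guaranteed by this card, and column names are printed rather than assumed.

```python
# Hedged sketch: inspect the precomputed CLIP-feature dataset referenced above.
# Assumption: the repo loads via `datasets.load_dataset` and exposes a "train" split.
from datasets import load_dataset

ds = load_dataset("AbstractPhil/imagenet-clip-features-orderly", split="train")
print(ds)               # row count and column listing
print(ds.column_names)  # actual field names (not assumed here)
print(ds[0].keys())     # inspect a single record before wiring up a dataloader
```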
## Performance

### Best Results
- **Validation Accuracy**: 78.90%
- **Best Epoch**: 4
- **Final Train Accuracy**: 86.63%

### Per-Scale Performance
- **Scale 256**: 74.62%
- **Scale 512**: 77.18%
- **Scale 768**: 77.98%
- **Scale 1024**: 77.99%
- **Scale 2048**: 77.91%
- **Scale 4096**: 77.97%

## Usage

### Quick Model Lookup

**Check `MODELS_INDEX.json` in the repo root**: it lists all trained models, sorted by accuracy, with links to weights and configs.
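
A minimal way to browse that index programmatically, assuming only that `MODELS_INDEX.json` is valid JSON at the repo root (the per-entry schema is whatever the index actually contains and is not assumed here):

```python
# Sketch: download and skim the master model index.
# Only the repo id and file name come from this card; entry fields are not assumed.
import json
from huggingface_hub import hf_hub_download

index_path = hf_hub_download(
    repo_id="AbstractPhil/gated-david",
    filename="MODELS_INDEX.json",
)
with open(index_path) as f:
    index = json.load(f)

print(json.dumps(index, indent=2)[:2000])  # preview the top of the index
```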
### Repository Structure

```
AbstractPhil/gated-david/
├── MODELS_INDEX.json                  # 📊 Master index of all models (sorted by accuracy)
├── README.md                          # This file
├── best_model.json                    # Latest best model info
├── weights/
│   └── clip_vit_b16_cantor_big_window/
│       └── 20251104_154540/
│           ├── MODEL_SUMMARY.txt                # 🎯 Human-readable performance summary
│           ├── training_history.json            # 📈 Epoch-by-epoch training curve
│           ├── best_model_acc78.90.safetensors  # ⭐ Accuracy in filename!
│           ├── best_model_acc78.90_metadata.json
│           ├── final_model.safetensors
│           ├── checkpoint_epoch_X_accYY.YY.safetensors
│           ├── david_config.json
│           └── train_config.json
└── runs/
    └── clip_vit_b16_cantor_big_window/
        └── 20251104_154540/
            └── events.out.tfevents.*            # TensorBoard logs
```

### Loading the Model

```python
from geovocab2.train.model.core.david import David, DavidArchitectureConfig
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Browse available models in MODELS_INDEX.json first!

# Specify model variant and run
model_name = "clip_vit_b16_cantor_big_window"
run_id = "20251104_154540"
accuracy = "78.90"  # From MODELS_INDEX.json

# Download config
config_path = hf_hub_download(
    repo_id="AbstractPhil/gated-david",
    filename=f"weights/{model_name}/{run_id}/david_config.json"
)
config = DavidArchitectureConfig.from_json(config_path)

# Download weights (accuracy in filename!)
weights_path = hf_hub_download(
    repo_id="AbstractPhil/gated-david",
    filename=f"weights/{model_name}/{run_id}/best_model_acc{accuracy}.safetensors"
)

# Download training history (optional - see the full training curve)
history_path = hf_hub_download(
    repo_id="AbstractPhil/gated-david",
    filename=f"weights/{model_name}/{run_id}/training_history.json"
)

# Load model
david = David.from_config(config)
david.load_state_dict(load_file(weights_path))
david.eval()
```

### Inference

```python
import torch

# Assuming you have CLIP features (512-dim for ViT-B/16)
features = get_clip_features(image)  # [1, 512]

# Load anchors
anchors_dict = torch.load("anchors.pth")

# Forward pass
with torch.no_grad():
    logits, _ = david(features, anchors_dict)

predictions = logits.argmax(dim=-1)
```

## Architecture Overview

### Multi-Scale Processing
David processes inputs at multiple scales (256, 512, 768, 1024, 2048, 4096), allowing it to capture both coarse and fine-grained features.

### Feature Geometry
Each class is represented by a pentachoron (4-simplex) in embedding space with 5 vertices (see the tensor sketch after this list):
- **Anchor**: Primary class representative
- **Need**: Complementary direction
- **Relation**: Contextual alignment
- **Purpose**: Functional direction
- **Observer**: Meta-perspective
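
One convenient way to hold these prototypes is a single tensor with one 5-vertex simplex per class. The layout below ([num_classes, 5, feature_dim] with a fixed role order) is an illustrative assumption, not the serialized format of the `anchors.pth` file used in this repo.

```python
# Hedged sketch: a pentachoron bank for 1000 ImageNet classes in a 512-dim feature space.
# The role order (anchor, need, relation, purpose, observer) is assumed for illustration only.
import torch
import torch.nn.functional as F

NUM_CLASSES, NUM_ROLES, DIM = 1000, 5, 512
ROLES = ("anchor", "need", "relation", "purpose", "observer")

pentachora = torch.randn(NUM_CLASSES, NUM_ROLES, DIM)  # [C, 5, D]
pentachora = F.normalize(pentachora, dim=-1)           # unit-norm vertices

anchor_of_class_3 = pentachora[3, ROLES.index("anchor")]  # one vertex: [D]
```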
### Rose Loss
Similarity computation uses role-weighted cosine similarities:
```
score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
```
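
A minimal sketch of that score, assuming the [num_classes, 5, feature_dim] layout from the previous example and a per-role weight vector; the actual weights and any margin terms used by the Rose Loss in this repo are not specified here.

```python
# Hedged sketch: role-weighted cosine scores for a batch of features z against all classes.
import torch
import torch.nn.functional as F

def rose_scores(z, pentachora, role_weights):
    """z: [B, D], pentachora: [C, 5, D], role_weights: [5] -> scores: [B, C]."""
    z = F.normalize(z, dim=-1)
    verts = F.normalize(pentachora, dim=-1)
    # Cosine similarity of each feature to every vertex of every class: [B, C, 5]
    sims = torch.einsum("bd,cvd->bcv", z, verts)
    # Weighted sum over roles: w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
    return torch.einsum("bcv,v->bc", sims, role_weights)

z = torch.randn(8, 512)
pentachora = torch.randn(1000, 5, 512)
weights = torch.tensor([0.4, 0.15, 0.15, 0.15, 0.15])   # illustrative role weights
print(rose_scores(z, pentachora, weights).shape)        # torch.Size([8, 1000])
```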
### Fusion Strategy
**cantor_scale**: combines the per-scale predictions into a single fused output, so the final classification draws on all six scales rather than any single one.

## Training Details

### Loss Components
- **Cross-Entropy**: Standard classification loss
- **Rose Loss**: Pentachora role-weighted margin loss (weight warmed from 0.1 to 0.5; see the schedule sketch after this list)
- **Cayley Loss**: Geometric regularization (disabled for this run)
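
The total objective is cross-entropy plus a warmed-up Rose term. The sketch below shows one way the 0.1 → 0.5 warm-up could be scheduled; the linear per-epoch ramp and the `rose_term` placeholder are assumptions, not the repo's exact implementation.

```python
# Hedged sketch: cross-entropy plus a Rose term whose weight ramps from 0.1 to 0.5.
import torch.nn.functional as F

def rose_weight(epoch, total_epochs=5, start=0.1, end=0.5):
    """Linear per-epoch ramp (assumed schedule, not confirmed by this card)."""
    t = epoch / max(total_epochs - 1, 1)
    return start + t * (end - start)

def total_loss(logits, targets, rose_term, epoch):
    """rose_term: precomputed Rose Loss value (placeholder for the repo's implementation)."""
    ce = F.cross_entropy(logits, targets)
    return ce + rose_weight(epoch) * rose_term
```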
### Optimization
- **Optimizer**: AdamW (setup sketch after this list)
- **Weight Decay**: 1e-05
- **Scheduler**: cosine_restarts
- **Gradient Clip**: 10.0
- **Mixed Precision**: False
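
These settings map directly onto standard PyTorch components. The sketch below reuses the `david` model loaded earlier; the learning rate, weight decay, and clip norm come from this card, while the restart period `T_0` is an assumption, since the card does not state it.

```python
# Hedged sketch: optimizer, cosine-with-restarts schedule, and gradient clipping as listed above.
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

optimizer = torch.optim.AdamW(david.parameters(), lr=1e-3, weight_decay=1e-5)
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=1)  # restart period is assumed, not stated

def training_step(loss):
    """One update with the gradient clipping used for this run (max norm 10.0)."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(david.parameters(), max_norm=10.0)
    optimizer.step()
```

With warm restarts, `scheduler.step()` is typically called once per epoch (or with a fractional epoch index per batch).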
## Citation

```bibtex
@software{david_classifier_2025,
  title = {David: Multi-Scale Feature Classifier},
  author = {AbstractPhil},
  year = {2025},
  url = {https://huggingface.co/AbstractPhil/gated-david},
  note = {Run ID: 20251104_154540}
}
```

## License

MIT License

## Acknowledgments

Built with lattice geometry and multi-scale deep learning.
Special thanks to Claude (Anthropic) for debugging assistance.

---

*Generated on 2025-11-04 15:57:33*
|