🧠 Mnemosyne-SCLM v3.0

Multimodal Stateful Language Model with Persistent Memory

Architecture

  • LLM: Llama 3.2 3B Instruct
  • Audio: Whisper Large v3
  • Memory: Multi-Scale (ST=512D, LT=256D)
  • EARCP: v3 with Online Learning
  • Context: 8K tokens
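The components above could be gathered into a single configuration object. The sketch below is a minimal illustration, not the model's actual code: the class and field names are hypothetical, while the values come from the spec list above.

```python
from dataclasses import dataclass

@dataclass
class MnemosyneConfig:
    # Hypothetical field names; values taken from the architecture list above.
    llm_model: str = "meta-llama/Llama-3.2-3B-Instruct"   # LLM backbone
    audio_model: str = "openai/whisper-large-v3"          # audio encoder
    short_term_memory_dim: int = 512                      # ST = 512D
    long_term_memory_dim: int = 256                       # LT = 256D
    earcp_version: int = 3                                # EARCP v3, online learning
    max_context_tokens: int = 8192                        # 8K-token context window

cfg = MnemosyneConfig()
```

Keeping these dimensions in one dataclass makes it easy to validate, for example, that the short-term memory is wider than the long-term store before wiring up the model.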

Author

Mike Amega - Ame Web Studio


Model tree for amewebstudio/mnemosyne-multimodal-fused
