Update README.md
README.md
[](#model-architecture)
<!-- | [](#datasets) -->
This model is a streaming version of the Sortformer diarizer. [Sortformer](https://arxiv.org/abs/2409.06656)[1] is a novel end-to-end neural model for speaker diarization, trained with unconventional objectives compared to existing end-to-end diarization models.
<div align="center">
<img src="sortformer_intro.png" width="750" />
</div>
The streaming Sortformer approach employs an Arrival-Order Speaker Cache (AOSC) to store frame-level acoustic embeddings of previously observed speakers.
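The idea of an arrival-order cache can be sketched as a small data structure (a toy illustration with hypothetical class and method names; the actual NeMo implementation differs):

```python
import numpy as np

class ArrivalOrderSpeakerCache:
    """Toy arrival-order speaker cache (AOSC) sketch.

    Stores frame-level embeddings per speaker, keyed in the order
    speakers were first observed, under a fixed per-speaker budget.
    """

    def __init__(self, max_frames_per_speaker=188):
        self.max_frames = max_frames_per_speaker
        self.cache = {}  # speaker index -> list of embedding vectors

    def update(self, speaker_idx, frame_embeddings):
        # Register the speaker on first appearance (arrival order).
        buf = self.cache.setdefault(speaker_idx, [])
        buf.extend(frame_embeddings)
        # Keep only the most recent frames within the budget.
        del buf[:-self.max_frames]

    def as_matrix(self):
        # Concatenate cached embeddings, earliest-arriving speaker first.
        rows = [v for idx in sorted(self.cache) for v in self.cache[idx]]
        return np.stack(rows) if rows else np.empty((0, 0))
```

The cache matrix produced this way can then be prepended to the incoming audio chunk's embeddings, so earlier speakers keep stable output channels across streaming steps.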
<div align="center">
<img src="streaming_sortformer_ani.gif" width="1400" />
</div>
Sortformer resolves the permutation problem in diarization by following the arrival-time order of the speech segments from each speaker.
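That ordering can be illustrated with a toy example (not NeMo code): given a frame-by-speaker activity matrix, output channels are assigned by each speaker's first active frame.

```python
import numpy as np

def sort_by_arrival(activity):
    """Reorder speaker columns by arrival time (first active frame).

    activity: (frames, speakers) binary matrix with arbitrary column
    order; returns the columns sorted so that channel 0 holds the
    first speaker to talk, channel 1 the second, and so on.
    """
    first_frame = [np.argmax(col > 0) if col.any() else np.inf
                   for col in activity.T]
    order = np.argsort(first_frame)  # silent speakers sort last
    return activity[:, order]
```

Because the target channel order is determined by arrival time rather than by an arbitrary permutation, the model can be trained with plain binary cross-entropy per channel instead of permutation-invariant loss matching.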
## Model Architecture
Streaming Sortformer employs the pre-encode layer of the Fast-Conformer to generate the speaker cache. At each step, the speaker cache is filtered to retain only high-quality speaker-cache vectors.
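One simple way to realize such filtering (a hypothetical sketch; the selection criterion in the actual model may differ) is to score each cached frame by its sigmoid speaker probability and keep only the top-scoring frames per speaker:

```python
import numpy as np

def filter_speaker_cache(embeddings, scores, keep=188):
    """Keep only the highest-confidence cache vectors for one speaker.

    embeddings: (n, d) cached frame embeddings for a speaker
    scores:     (n,) per-frame speaker probabilities (sigmoid outputs)
    keep:       cache budget per speaker
    """
    if len(scores) <= keep:
        return embeddings
    # Indices of the `keep` best frames, restored to time order.
    top = np.sort(np.argsort(scores)[-keep:])
    return embeddings[top]
```

Restoring time order after selection matters because the downstream encoder consumes the cache as a temporal sequence.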
<div align="center">
<img src="streaming_steps.png" width="1400" />
</div>
Aside from the speaker-cache management, streaming Sortformer follows the architecture of the offline version of Sortformer. Sortformer consists of an L-size (18 layers) [NeMo Encoder for Speech Tasks (NEST)](https://arxiv.org/abs/2408.13106)[2], which is based on the [Fast-Conformer](https://arxiv.org/abs/2305.05084)[3] encoder. This is followed by an 18-layer Transformer[4] encoder with a hidden size of 192, and two feedforward layers with 4 sigmoid outputs for each frame at the top layer. More information can be found in the [Sortformer paper](https://arxiv.org/abs/2409.06656)[1].
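The shape of that prediction head can be sketched as follows (a toy numpy version with random stand-in weights, not the trained NeMo model; the feedforward width of 1024 is an assumed value): each frame embedding from the Transformer (hidden size 192) passes through the feedforward layers and ends in 4 sigmoid outputs, one activity probability per supported speaker.

```python
import numpy as np

rng = np.random.default_rng(0)
T, hidden, ff, n_spk = 100, 192, 1024, 4   # ff width is an assumption

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random stand-ins for the two feedforward layers on top of the encoder.
W1 = rng.normal(size=(hidden, ff)) * 0.02
W2 = rng.normal(size=(ff, n_spk)) * 0.02

frames = rng.normal(size=(T, hidden))   # Transformer encoder output
h = np.maximum(frames @ W1, 0)          # feedforward layer 1 + ReLU
probs = sigmoid(h @ W2)                 # (T, 4) per-speaker activity
```

Because each of the 4 outputs is an independent sigmoid rather than a softmax, overlapping speech is handled naturally: several channels can be active in the same frame.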
</div>
## NVIDIA NeMo