Space: LiKenun / ai-building-blocks
Running on Zero
main · ai-building-blocks · 57.5 kB · 1 contributor · History: 23 commits

Latest commit (b71a3ad, about 1 month ago) by LiKenun: Switch text-to-image and automatic speech recognition (ASR) back to using the Hugging Face inference client; Zero GPU cannot accommodate the time it takes for those tasks
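The latest commit message describes the trade-off directly: on a ZeroGPU Space, long-running tasks can exceed the GPU time slice, so text-to-image and ASR were moved back to the hosted inference client. A minimal sketch of that client-side pattern follows; the model IDs are illustrative assumptions, not necessarily what this Space uses.

```python
# Minimal sketch of calling the Hugging Face inference client for the two tasks
# named in the commit message. The model IDs below are illustrative assumptions,
# not confirmed choices of this Space.
from huggingface_hub import InferenceClient

client = InferenceClient()  # picks up HF_TOKEN / cached login if available

# Text-to-image runs on Hugging Face's hosted inference backend rather than the
# Space's own ZeroGPU hardware.
image = client.text_to_image(
    "a watercolor lighthouse at dawn",
    model="stabilityai/stable-diffusion-xl-base-1.0",
)
image.save("lighthouse.png")

# Automatic speech recognition is likewise delegated to the hosted backend.
result = client.automatic_speech_recognition(
    "sample.flac",  # local path, bytes, or URL
    model="openai/whisper-large-v3",
)
print(result.text)
```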
File · Size · Last commit message · Last updated
.cursorignore · 29 Bytes · Initial gallery · about 1 month ago
.gitattributes · 1.52 kB · initial commit · about 1 month ago
.gitignore · 173 Bytes · Initial gallery · about 1 month ago
README.md · 10.4 kB · Switch text-to-image and automatic speech recognition (ASR) back to using the Hugging Face inference client; Zero GPU cannot accommodate the time it takes for those tasks · about 1 month ago
app.py · 4.31 kB · Switch text-to-image and automatic speech recognition (ASR) back to using the Hugging Face inference client; Zero GPU cannot accommodate the time it takes for those tasks · about 1 month ago
automatic_speech_recognition.py · 3.43 kB · Switch text-to-image and automatic speech recognition (ASR) back to using the Hugging Face inference client; Zero GPU cannot accommodate the time it takes for those tasks · about 1 month ago
chatbot.py · 9.76 kB · Updated code to address “UserWarning: You have not specified a value for the `type` parameter” · about 1 month ago
image_classification.py · 3.59 kB · Switch to use GPU instead of inference client · about 1 month ago
image_to_text.py · 3.31 kB · Switch to use GPU instead of inference client · about 1 month ago
packages.txt · 10 Bytes · Add required `espeak` · about 1 month ago
requirements.txt · 338 Bytes · Enable low CPU memory usage with `accelerate` during model loading, which is useful on Hugging Face Spaces and other memory-constrained environments · about 1 month ago
text_to_image.py · 1.84 kB · Switch text-to-image and automatic speech recognition (ASR) back to using the Hugging Face inference client; Zero GPU cannot accommodate the time it takes for those tasks · about 1 month ago
text_to_speech.py · 2.71 kB · Switch to use GPU instead of inference client · about 1 month ago
translation.py · 5.89 kB · Switch to use GPU instead of inference client · about 1 month ago
utils.py · 10.2 kB · Switch to use GPU instead of inference client · about 1 month ago
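For the files whose last commit reads "Switch to use GPU instead of inference client" (image_classification.py, image_to_text.py, text_to_speech.py, translation.py, utils.py), the usual ZeroGPU pattern is to load a transformers pipeline locally and decorate the inference function with `@spaces.GPU`. The sketch below assumes that pattern with an illustrative task and model; it is not the Space's actual code. The `espeak` entry in packages.txt is presumably the system-level phonemizer backend needed by the text-to-speech model.

```python
# Hedged sketch of running a task on the Space's own GPU via ZeroGPU, as the
# "Switch to use GPU instead of inference client" commits suggest. The task and
# model ID are illustrative assumptions.
import spaces                      # ZeroGPU helper available on Hugging Face Spaces
from transformers import pipeline

# Load once at import time; under ZeroGPU the GPU is only attached while a
# @spaces.GPU-decorated function is executing.
classifier = pipeline(
    "image-classification",
    model="google/vit-base-patch16-224",
    device="cuda",  # ZeroGPU defers actual GPU allocation until the decorated call
)

@spaces.GPU  # request a GPU slice for the duration of this call
def classify(image):
    # `image` may be a PIL.Image, a local path, or a URL accepted by the pipeline.
    return classifier(image)
```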
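The chatbot.py commit quotes the Gradio warning emitted when a Chatbot or ChatInterface is created without an explicit `type`. Passing `type="messages"` (OpenAI-style role/content dicts) is the documented way to silence it; the handler below is a placeholder, not the Space's chatbot logic.

```python
import gradio as gr

def respond(message, history):
    # Placeholder echo handler; the Space's real chatbot logic is not shown here.
    return f"You said: {message}"

# type="messages" makes the history a list of {"role": ..., "content": ...}
# dicts and silences the "You have not specified a value for the `type`
# parameter" UserWarning that the chatbot.py commit message refers to.
demo = gr.ChatInterface(fn=respond, type="messages")

if __name__ == "__main__":
    demo.launch()
```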
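The requirements.txt commit points at transformers' `low_cpu_mem_usage` loading path, which depends on the `accelerate` package. A minimal sketch, assuming an arbitrary checkpoint:

```python
# Sketch of memory-frugal model loading; the checkpoint is an illustrative
# assumption, not one confirmed to be used by this Space.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Helsinki-NLP/opus-mt-en-fr"  # hypothetical example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# low_cpu_mem_usage=True requires `accelerate`: weights are loaded straight into
# their final tensors instead of being materialized twice in CPU RAM, which
# matters on memory-constrained hosts like Hugging Face Spaces.
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, low_cpu_mem_usage=True)
```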