The Flax implementation on TPUs currently has a slight performance regression relative to the PyTorch implementations. The comparison can be seen here.
If you want to evaluate GPT-NeoX-20B for research purposes, please use the original GPT-NeoX, Minimal PyTorch, or Hugging Face implementations...
MIT License
Model Card: Whisper
This is the official codebase for running the automatic speech recognition (ASR) models (Whisper models) trained and released by OpenAI.
Following Model Cards for Model Reporting (Mitchell et al.), we're providing some information about the automatic speech...
In setting up ColabRating, we found a selection of Colabs already published on the web whose licenses allowed us to republish them here.
Inevitably, that will mean that some Colabs that the creators would like to claim for their own ColabRating account will instead be assigned to this...
Making talking robots with GPT-2
This is a tutorial on using machine learning to generate realistic English text, in any style you like. It doesn't require any coding, and by the end you will have built a simple chatbot using the state-of-the-art GPT-2 model, and hopefully learned a little...
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It's trained on 512x512 images from a subset of the LAION-5B database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text...
This notebook illustrates the DALL·E mini inference pipeline.
Just want to play? Use the app directly.
For a deeper understanding of the model, refer to the report.
StyleGAN3 + CLIP 🖼️
Generate images from text prompts using NVIDIA's StyleGAN3 with CLIP guidance.
Head over here if you want to stay up to date with changes to this notebook and play with other alternatives.
The original code was written by nshepperd (https://github.com/nshepperd), and...