Introduction to Pixray

A simple explanation of what happens under the hood.

At its core, Pixray uses CLIP to guide image generation from a text prompt.

Pixray uses CLIP (Contrastive Language-Image Pre-Training) by OpenAI as its perception engine. Check out CLIP's official repository and the paper (Learning Transferable Visual Models From Natural Language Supervision) to learn more about CLIP.
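
To make the "perception engine" idea concrete, here is a minimal sketch (not taken from Pixray's codebase) that uses OpenAI's `clip` package to score how well an image matches a text prompt. The model name, file path, and prompt are purely illustrative.

```python
# Minimal sketch: score an image against a text prompt with CLIP.
# Not Pixray's code; model name, file path, and prompt are illustrative.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("example.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a watercolor painting of a fox"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity: higher means the image better matches the prompt.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).item()
print(f"CLIP similarity: {similarity:.3f}")
```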

The vqgan drawer uses Vector Quantized Generative Adversarial Networks, or simply VQGAN, by Esser et al. from the paper Taming Transformers for High-Resolution Image Synthesis. The original code for VQGAN is in this repository.

The clipdraw drawer uses CLIPDraw by Kevin Frans, from the paper CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders.

The diffusion drawer uses diffusion models trained by Katherine Crowson. The code we use for diffusion is in this repository.

The fft drawer is from the Aphantasia library by vadim epstein. The original code for the fft drawer is in this repository.

The pixel drawer was created by dribnet, the author of Pixray.

The generation algorithm works as follows: the text prompt is encoded by CLIP into an embedding, and the current output image is encoded the same way. The distance between the two embeddings gives a loss that is backpropagated through the generation engine, updating its latent representation to reduce the loss. The result with minimal loss is the output image that best matches the text prompt.
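
As a rough illustration of this loop (not Pixray's actual implementation), the sketch below optimizes a toy low-resolution latent that is upsampled into an image. The prompt, learning rate, step count, and the simple upsampling "decoder" are stand-ins; a real drawer replaces that decode step with VQGAN, diffvg, a diffusion model, and so on.

```python
# Conceptual sketch of CLIP-guided optimization. The upsampling "decoder"
# is a toy stand-in for a real drawer; all hyperparameters are illustrative.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep weights in fp32 so gradients stay in full precision

# Encode the prompt once; its embedding is the fixed optimization target.
with torch.no_grad():
    text_embed = model.encode_text(clip.tokenize(["a castle at sunset"]).to(device))
    text_embed = text_embed / text_embed.norm(dim=-1, keepdim=True)

# Toy "latent space": a low-resolution tensor decoded to an image by upsampling.
latent = torch.randn(1, 3, 32, 32, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(300):
    image = torch.sigmoid(F.interpolate(latent, size=224, mode="bilinear", align_corners=False))
    image_embed = model.encode_image(image)  # CLIP input normalization omitted for brevity
    image_embed = image_embed / image_embed.norm(dim=-1, keepdim=True)

    # Loss is 1 - cosine similarity between image and text embeddings;
    # backpropagation updates the latent so the decoded image matches the prompt.
    loss = 1 - (image_embed @ text_embed.T).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```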

The difference between the drawers lies in how the image itself is generated and what the latent space of the model represents. It is also possible to generate an image without an underlying model at all, by simply optimizing a pixel tensor directly, as in the sketch below.
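
In that case the "latent space" is literally the image. The variant below (again only a sketch, with illustrative names and hyperparameters) drops the decoder entirely and optimizes a raw 224x224 pixel tensor against the CLIP loss.

```python
# Sketch of the "no underlying model" case: the optimized variable is the
# image itself. Prompt, learning rate, and step count are illustrative.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()

with torch.no_grad():
    text_embed = model.encode_text(clip.tokenize(["a red cube on a table"]).to(device))
    text_embed = text_embed / text_embed.norm(dim=-1, keepdim=True)

# The "latent" is just a 224x224 RGB tensor with gradients enabled.
pixels = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([pixels], lr=0.05)

for step in range(200):
    image_embed = model.encode_image(pixels.clamp(0, 1))
    image_embed = image_embed / image_embed.norm(dim=-1, keepdim=True)
    loss = 1 - (image_embed @ text_embed.T).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```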
