Introduction to Pixray

A simple explanation of what happens under the hood.

At its core, Pixray uses CLIP to guide image generation from a text prompt.

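For orientation, a typical call looks something like the sketch below. The `pixray.run` entry point, its argument names, and the prompt are assumptions for illustration and may not match the current interface; check the Pixray repository for the exact usage.

```python
# Hypothetical usage sketch: the entry point and argument names are assumptions
# and may differ between Pixray versions.
import pixray

# Ask the vqgan drawer (introduced below) to synthesize an image matching the prompt.
pixray.run("a watercolor painting of a lighthouse", drawer="vqgan")
```
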
Pixray uses CLIP (Contrastive Language-Image Pre-Training) by OpenAI as its perception engine. Check out CLIP's official repository and paper (Learning Transferable Visual Models From Natural Language Supervision) to learn more about CLIP.

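To make CLIP's role concrete, here is a minimal sketch that scores how well an image matches a few text prompts using the official `clip` package. The file name and prompts are placeholders; Pixray wires this kind of scoring into an optimization loop rather than calling it once.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode one image and a few candidate prompts into CLIP's shared embedding space.
image = preprocess(Image.open("output.png")).unsqueeze(0).to(device)
tokens = clip.tokenize(["a watercolor painting of a fox",
                        "a photo of a city at night"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(tokens)

# Cosine similarity: higher values mean the image matches that prompt better.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).squeeze(0)
print(similarity.tolist())
```
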
The vqgan drawer uses Vector Quantized Generative Adversarial Networks, or simply VQGAN, by Esser et al. from the paper Taming Transformers for High-Resolution Image Synthesis. The original code for VQGAN is in this repository.

The clipdraw drawer uses CLIPDraw by Kevin Frans, from the paper CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders.

The diffusion drawer uses diffusion models trained by Katherine Crowson. The diffusion code we use is in this repository.

The fft drawer is from the Aphantasia library by Vadim Epstein. The original code for the fft drawer is in this repository.

The pixel drawer was created by dribnet, the author of Pixray.

The generation algorithm encodes the text prompt with CLIP to produce an embedding, which is compared against the CLIP embedding of the current output image to compute a loss. That loss is backpropagated through the generation engine, optimizing a point in its latent space to minimize the loss. The minimal-loss result is the output image that most resembles the text prompt.

The drawers differ in how the image itself is generated and in what the latent space of the model represents. It is even possible to generate an image without any underlying model, by simply optimizing a tensor of pixel values directly, as in the sketch below.

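Put concretely, a minimal version of this loop might look like the following sketch. It uses the official `clip` package, with a raw pixel tensor standing in for a drawer's latent space (the model-free case described above); the prompt, learning rate, and step count are arbitrary, and real drawers add differentiable rendering, image augmentations, and extra losses on top of this.

```python
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
perceptor, _ = clip.load("ViT-B/32", device=device)

# Encode the prompt once; it stays fixed during optimization.
with torch.no_grad():
    tokens = clip.tokenize(["a sunset over the ocean"]).to(device)
    text_embed = F.normalize(perceptor.encode_text(tokens), dim=-1)

# The "latent" here is just a raw pixel tensor (the model-free case). A real
# drawer would instead hold VQGAN codes, stroke parameters, FFT coefficients,
# and so on, plus a differentiable step that renders them into an image.
latent = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image = torch.sigmoid(latent)  # map the latent into an image in [0, 1]
    image_embed = F.normalize(perceptor.encode_image(image), dim=-1)

    # Negative cosine similarity: the lower the loss, the better the image
    # matches the prompt according to CLIP.
    loss = -(image_embed * text_embed).sum(dim=-1).mean()
    loss.backward()
    optimizer.step()

    if step % 50 == 0:
        print(f"step {step:4d}  loss {loss.item():.4f}")
```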