Introduction to Pixray

A simple explanation of what happens under the hood.

The main function of Pixray is the use of CLIP to guide image generation from text.

Pixray uses CLIP (Contrastive Language-Image Pre-Training) by OpenAI as its perception engine. Check out CLIP's official repository and paper (Learning Transferable Visual Models From Natural Language Supervision) to learn more about CLIP.
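
As a rough illustration of what "perception engine" means here, the sketch below (not Pixray's own code) uses OpenAI's clip package to embed a prompt and an image into the same space and score how well they match. The model name, image file, and prompt text are placeholders chosen for the example.

```python
# Illustrative sketch only, not Pixray's actual code.
# Assumes torch, Pillow, and OpenAI's clip package are installed
# (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder inputs: any image file and any text prompt.
image = preprocess(Image.open("output.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a watercolor painting of a lighthouse"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity in CLIP's joint embedding space: higher means the
# image is perceived as a better match for the prompt.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print((image_features @ text_features.T).item())
```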

The vqgan drawer uses Vector Quantized Generative Adversarial Networks, or simply VQGAN, by Esser et al., from the paper Taming Transformers for High-Resolution Image Synthesis. The original code for VQGAN is in this repository.

The clipdraw drawer uses ClipDraw by Kevin Frans, from the paper CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders.

The diffusion drawer uses diffusion models, and the models are trained by Katherine Crowson. The code we use for diffusion is in this repository.

The fft drawer is from the Aphantasia library by vadim epstein. The original code for the fft drawer is in this repository.

The pixel drawer was created by dribnet, the author of pixray.

The generation algorithm encodes the prompts through CLIP to produce embeddings, which are compared against CLIP's encoding of the output image to compute a loss. That loss is backpropagated through the generation engine to optimize a latent representation that minimizes it; the minimal-loss result is the output that most closely resembles the text it was prompted with.

The difference between the drawers lies in how the image itself is generated and what the latent space of the model represents. It is also possible to generate an image without any underlying model at all, by simply optimizing a tensor of pixels directly.
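
As a deliberately minimal illustration of the loop described above, the toy sketch below skips any drawer model and optimizes a raw pixel tensor directly against a CLIP text embedding. It is an assumption-laden example, not Pixray's implementation: the prompt, step count, and learning rate are arbitrary, and a real drawer would replace the sigmoid(latent) line with its own latent-to-image rendering.

```python
# Toy sketch of CLIP-guided generation, not Pixray's actual code.
# The "drawer" here is just a raw pixel tensor; real drawers (vqgan,
# clipdraw, diffusion, fft, pixel) render an image from their own
# latent representation instead.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.requires_grad_(False)  # only the latent is being optimized

prompt = "a watercolor painting of a lighthouse"  # placeholder prompt
with torch.no_grad():
    text_features = F.normalize(
        model.encode_text(clip.tokenize([prompt]).to(device)), dim=-1)

# The latent to optimize: here it is literally the image (1, 3, 224, 224).
latent = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

# Normalization CLIP expects on its image inputs.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(250):
    optimizer.zero_grad()
    image = torch.sigmoid(latent)  # keep pixels in [0, 1]
    image_features = F.normalize(model.encode_image((image - mean) / std), dim=-1)
    # Loss = negative cosine similarity; minimizing it pulls the image's
    # CLIP embedding toward the prompt's CLIP embedding.
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()
```

Swapping the raw tensor for a generative model's latent (as the vqgan or diffusion drawers do) changes what the optimizer can express, which is why different drawers produce such different styles from the same prompt.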
