Pixray is an image generation system

Overview

pixray

Pixray is an image generation system that combines ideas from several earlier projects.

pixray is itself a Python library and command-line utility, but is also friendly to running online in Google Colab notebooks.

There is currently some documentation on options. Also check out THE DEMO NOTEBOOKS or join the discussion on Discord.

Usage

Pixray can be run in Docker using Cog.

First, install Docker and Cog, then you can use cog run to run Pixray inside Docker. For example:

cog run python pixray.py --drawer=pixel --prompt=sunrise --output myfile.png
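
For a different drawer and an output directory, the same pattern applies; the command below simply mirrors the vdiff invocation reported in the issues further down:

cog run python pixray.py --drawer=vdiff --prompts="an excessively fuzzy panda" --outdir panda
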
Comments
  • Implement a basic log for debugging

    Saw an open issue about improving outputs so I took a stab at it. Didn't want to do too much as I saw you may already have some updates in mind regarding the file name / directory structure.

    Summary of changes:

    • Pixray will now take an optional parameter "--debug": a boolean value that indicates whether to output a debug log with the final output (see the sketch after this list).
    • The debug log currently includes the settings used to generate an image. (More can be added later.)
    • Added a reusable file-path calculation function to the utility class.
    • Added some unit tests.
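
    A minimal, hypothetical sketch of what such an optional --debug flag could look like (this is not the PR's actual code; the flag behaviour, argument names, and file naming here are assumptions):

        import argparse
        import json

        parser = argparse.ArgumentParser()
        parser.add_argument("--prompts", type=str, default="")
        parser.add_argument("--output", type=str, default="output.png")
        parser.add_argument("--debug", action="store_true",
                            help="also write a debug log containing the settings used")
        args = parser.parse_args()

        if args.debug:
            # Dump the parsed settings next to the final output image.
            with open(args.output + ".debug.json", "w") as f:
                json.dump(vars(args), f, indent=2, default=str)
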
    opened by sgallag-insta 18
  • Add overlay until option

    Summary of changes:

    • Added --overlay_until option.
      • Takes an integer argument that is the number of iterations.
      • Default value is None.

    Tests can be run by executing python -m unittest tests/test_pixray.py from the main pixray directory.

    opened by sgallag-insta 10
  • BLIP loss

    I had to lower num_cuts when running.

        prompts="warrior. concept art. trending on artstation",
        drawer="super_resolution",
        size=[512, 512],
        num_cuts=8,
        quality="normal",
        learning_rate=0.1,
        init_image="human.jpg",
    
    opened by samedii 9
  • ImportError: cannot import name 'SimpleTokenizer' from 'tokenizer'

    I keep getting this error...

    ImportError: cannot import name 'SimpleTokenizer' from 'tokenizer' (C:\Users\micro\anaconda3\envs\pixel\lib\site-packages\tokenizer\__init__.py)

    opened by dillfrescott 7
  • Parse units in arguments

    Summary of changes:

    • Added a new parse_unit function that parses strings with units ("20 iterations", "50%", etc.) into a raw iteration integer (a rough sketch of the idea follows this list).
    • Slightly refactored how parameters with pipes are handled.
    • Overlay-related arguments now use strings with units specified rather than integers.
    • Added test cases.
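
    For illustration only (this is not the PR's implementation; the function name matches but the exact behaviour shown here is an assumption), such a helper might convert "50%" into half of the run's total iterations and "20 iterations" (or a bare "20") into the integer 20:

        import re

        def parse_unit(value, total_iterations):
            # Hypothetical sketch: convert "20 iterations", "20" or "50%" into
            # an iteration count, given the run's total number of iterations.
            if value is None:
                return None
            text = str(value).strip().lower()
            if text.endswith("%"):
                return int(round(float(text[:-1]) / 100.0 * total_iterations))
            match = re.match(r"^(\d+)\s*(iterations?)?$", text)
            if match:
                return int(match.group(1))
            raise ValueError(f"could not parse unit string: {value!r}")

        # parse_unit("50%", 300) -> 150, parse_unit("20 iterations", 300) -> 20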

    Haven't had time to hook up the other arguments yet but I can continue with that.

    opened by sgallag-insta 7
  • Pixray not loading on chromebook with Lightspeed

    Hey! I like Pixray, it can make some cool art (better than me D:). However, on my Chromebook at school, it cannot load the Input or Output areas without a proxy. I'm using the Lightspeed blocker, if that helps.

    opened by Erisfiregamer1 7
  • vdiff model is no longer available

    Hello! It looks like the model for vdiff is no longer available at the URL currently used in Pixray. Attempting to use the vdiff drawer gives me this:

    (base) [email protected]:~/pixray$ python pixray.py --drawer=vdiff --prompts="an excessively fuzzy panda" --outdir panda
    Running with 30x1 = 30 cuts
    Using seed: 7387649654636579532
    Downloading models/yfcc_2.pth from https://v-diffusion.s3.us-west-2.amazonaws.com/yfcc_2.pth, please wait
    --2022-05-04 13:39:59--  https://v-diffusion.s3.us-west-2.amazonaws.com/yfcc_2.pth
    Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 52.218.228.113
    Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|52.218.228.113|:443... connected.
    HTTP request sent, awaiting response... 404 Not Found
    2022-05-04 13:39:59 ERROR 404: Not Found.
    
    Ignoring non-zero exit:  b''
    Traceback (most recent call last):
      File "/home/fox/pixray/pixray.py", line 2135, in <module>
        main()
      File "/home/fox/pixray/pixray.py", line 2129, in main
        do_init(settings)
      File "/home/fox/pixray/pixray.py", line 613, in do_init
        drawer.load_model(args, device)
      File "/home/fox/pixray/vdiff.py", line 85, in load_model
        model.load_state_dict(torch.load(checkpoint, map_location='cpu'))
      File "/home/fox/miniconda3/lib/python3.9/site-packages/torch/serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/fox/miniconda3/lib/python3.9/site-packages/torch/serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    

    It also leaves a 0-byte yfcc_2.pth file in the models folder. Following the link it gives likewise yields a 404 page saying the bucket no longer exists.
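
    A small defensive sketch (hypothetical, not pixray's current code) of how this failure could surface earlier, by refusing to load an empty checkpoint instead of letting torch.load fail with "Ran out of input":

        import os
        import torch

        def load_checkpoint(path):
            # A failed download can leave a 0-byte file behind; catch that here.
            if not os.path.exists(path) or os.path.getsize(path) == 0:
                raise RuntimeError(f"checkpoint missing or empty, please re-download: {path}")
            return torch.load(path, map_location="cpu")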

    opened by graytFox 5
  • update predictors to use Cog's new Pydantic API

    Hey @dribnet 👋🏼

    This PR updates pixray's Cog predictor classes to be compatible with Cog's new Python API.

    Cog v0.1.0 has a new predictor API that makes use of Python's built-in type annotations to declare input and output types. The new API also has a different way of declaring inputs based on pydantic, a Python library for data validation. Instead of using the @cog.input decorators, inputs are now declared inline as parameters to the predict() method.

    There's a bunch of other useful foundational stuff in this new release of Cog that gets us closer to having a standardized type system that leans on JSON Schema and OpenAPI instead of re-inventing our own thing. For more details, see the release notes here: https://github.com/replicate/cog/releases/tag/v0.1.0
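
    For orientation, a predictor written against the new API looks roughly like the sketch below (the input names and defaults here are illustrative assumptions, not pixray's actual cogrun.py inputs):

        from cog import BasePredictor, Input, Path

        class Predictor(BasePredictor):
            def setup(self):
                # load models into memory once, before any predictions
                ...

            def predict(
                self,
                prompts: str = Input(description="text prompt", default="sunrise"),
                drawer: str = Input(description="rendering backend", default="vqgan"),
            ) -> Path:
                # run the generation and return the path to the output image
                ...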

    Process

    Here's the process I followed to set things up, make changes, and test:

    1. created a new model https://replicate.com/zeke/pydantic-pixray
    2. forked this repo, recursively cloned it, and cog pushed it to zeke/pydantic-pixray using "old cog" (0.0.x)
    3. verified that my unchanged fork of pixray worked by running some predictions
    4. upgraded my local Cog version to the latest, 0.1.1
    5. updated all the Predictors in cogrun.py to use cog.BasePredictor, cog.Input, cog.Path etc.
    6. set the image field in cog.yaml to publish to my own copy of the model and the predict field to cogrun.py:PixrayVdiff
    7. published using cog push
    8. ran cog predict from a GCP image with a GPU. See https://gist.github.com/zeke/a36c059bebb751fb21b26c1d14ed1996

    Progress

    I am now able to cog build, cog push, and cog predict the changes herein using the latest version of Cog, but still hitting a few snags:

    • The PixrayVdiff predictor (which I think corresponds to https://replicate.com/pixray/text2image) produces output, but it's yielding the same image over and over. See https://gist.github.com/zeke/a36c059bebb751fb21b26c1d14ed1996
    • Some of the existing predictors accept a kwargs argument, but the new version of Cog has a strict list of allowed input types. In order to be compatible with Cog's new type checking stuff, these predictors that accept arbitrary keyword arguments will need to be expanded to explicitly list all the arguments and their types.

    Next steps

    @dribnet hopefully this gives you a head start for updating pixray to work with the new version of Cog. Let me know if this all makes sense, and if you need more help getting these changes shipped.

    opened by zeke 5
  • New option to load from yaml without using run.py

    @dribnet Can you please check and merge this feature?

    It is meant to take a new argument, --config-file <path_to_yaml>, without using the run.py script. (Sorry for the delays, this is my first PR on a repo I forked ;))
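
    A rough sketch of the idea (assumed, not this PR's actual code): read a YAML file of settings and merge it with whatever else is passed on the command line:

        import argparse
        import yaml

        parser = argparse.ArgumentParser()
        parser.add_argument("--config-file", type=str, default=None)
        args, remaining = parser.parse_known_args()

        settings = {}
        if args.config_file:
            with open(args.config_file) as f:
                settings = yaml.safe_load(f) or {}
        # settings can then be applied on top of (or underneath) the CLI arguments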

    See ya! And, btw, great job on pixray...

    opened by syllebra 5
  • Fast pixel drawer

    Only supports "rectangular pixels".

    250it [00:31,  7.82it/s]
    vs.
    235it [02:22,  1.73it/s]
    

    Also uses a little less memory: 6433 MiB vs 8747 MiB.

    opened by samedii 4
  • Make apt-get update/install single line

    This is a little Docker gotcha -- at some point the apt repositories will change, and then apt-get install will fail because it's using the stale cached output of apt-get update.

    We should make it harder to trip up on this in Cog, or document it, or something...

    opened by bfirsh 3
  • Python 3.8.5 torch missing version

    ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0)

    Python 3.8.5 is the oldest version available on miniconda.

    (3.8) [email protected] pixray % pip install -r requirements.txt
    Looking in links: https://download.pytorch.org/whl/torch_stable.html
    Collecting git+https://github.com/bfirsh/taming-transformers.git@7a6e64ee (from -r requirements.txt (line 28))
      Cloning https://github.com/bfirsh/taming-transformers.git (to revision 7a6e64ee) to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-zaqc64ss
      Running command git clone --filter=blob:none --quiet https://github.com/bfirsh/taming-transformers.git /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-zaqc64ss
      WARNING: Did not find branch or tag '7a6e64ee', assuming revision or ref.
      Running command git checkout -q 7a6e64ee
      Resolved https://github.com/bfirsh/taming-transformers.git to commit 7a6e64ee
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/openai/CLIP (from -r requirements.txt (line 29))
      Cloning https://github.com/openai/CLIP to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-wdkc5c31
      Running command git clone --filter=blob:none --quiet https://github.com/openai/CLIP /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-wdkc5c31
      Resolved https://github.com/openai/CLIP to commit d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/pvigier/perlin-numpy@6f077f8 (from -r requirements.txt (line 30))
      Cloning https://github.com/pvigier/perlin-numpy (to revision 6f077f8) to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-iyuy_gin
      Running command git clone --filter=blob:none --quiet https://github.com/pvigier/perlin-numpy /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-iyuy_gin
      WARNING: Did not find branch or tag '6f077f8', assuming revision or ref.
      Running command git checkout -q 6f077f8
      Resolved https://github.com/pvigier/perlin-numpy to commit 6f077f8
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/fbcotter/pytorch_wavelets (from -r requirements.txt (line 46))
      Cloning https://github.com/fbcotter/pytorch_wavelets to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-a0yl7grc
      Running command git clone --filter=blob:none --quiet https://github.com/fbcotter/pytorch_wavelets /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-a0yl7grc
      Resolved https://github.com/fbcotter/pytorch_wavelets to commit 9a0c507f04f43c5397e384bb6be8340169b2fd9a
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/pixray/aphantasia@7e6b3bb (from -r requirements.txt (line 49))
      Cloning https://github.com/pixray/aphantasia (to revision 7e6b3bb) to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-jbsijre8
      Running command git clone --filter=blob:none --quiet https://github.com/pixray/aphantasia /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-jbsijre8
      WARNING: Did not find branch or tag '7e6b3bb', assuming revision or ref.
      Running command git checkout -q 7e6b3bb
      Resolved https://github.com/pixray/aphantasia to commit 7e6b3bb
      Preparing metadata (setup.py) ... done
    ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0)
    ERROR: No matching distribution found for torch==1.9.0+cu102
    (3.8) [email protected] pixray % python --version
    Python 3.8.5
    
    
    
    opened by codymarcel 0
  • Installation error with `torch`

    I am trying to install using the requirements.txt. However, I am getting this error:

    Collecting package metadata (current_repodata.json): ...working... done
    Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
    Collecting package metadata (repodata.json): ...working... done
    Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.

    PackagesNotFoundError: The following packages are not available from current channels:

    • torch==1.9.0+cu102

    Current channels:

    • https://repo.anaconda.com/pkgs/main/win-64
    • https://repo.anaconda.com/pkgs/main/noarch
    • https://repo.anaconda.com/pkgs/r/win-64
    • https://repo.anaconda.com/pkgs/r/noarch
    • https://repo.anaconda.com/pkgs/msys2/win-64
    • https://repo.anaconda.com/pkgs/msys2/noarch

    To search for alternate channels that may provide the conda package you're looking for, navigate to

    https://anaconda.org
    

    My computer is Windows 10, and I created a Python 3.8 virtual env using Conda. Here are my CUDA settings:

    import torch
    import tensorflow as tf
    import tensorflow.keras as ks
    
    print(tf)
    print(ks)
    print(torch.cuda.is_available())
    print(torch.version.cuda)
    print(torch.backends.cudnn.version())
    print('//////////')
    
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    X_train = torch.FloatTensor([0., 1., 2.])
    X_train = X_train.to(device)
    
    
    print(X_train.is_cuda)
    print(torch.cuda.current_device())
    print(torch.cuda.device_count())
    print(torch.cuda.get_device_name(0))
    

    Output

    <module 'tensorflow' from 'C:\Venv\conda_python3_8_tensorflow_gen_art\lib\site-packages\tensorflow\__init__.py'>
    <module 'tensorflow.keras' from 'C:\Venv\conda_python3_8_tensorflow_gen_art\lib\site-packages\tensorflow\keras\__init__.py'>
    True
    11.3
    8302
    //////////
    True
    0
    1
    NVIDIA GeForce GTX 1070 Ti

    Process finished with exit code 0

    opened by kaionwong 3
  • Replace wget with requests for windows compatibility

    The file download utility has been changed from a Linux utility to a Python library. It downloads at a good speed without too much RAM or CPU usage. It requires a new import, unfortunately, but it is a common one.
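
    As a rough sketch of the approach (not necessarily this PR's exact code), a streaming download with requests looks like this:

        import requests

        def download_file(url, dest, chunk_size=1 << 20):
            # Stream the file to disk in chunks so memory use stays low.
            with requests.get(url, stream=True) as resp:
                resp.raise_for_status()
                with open(dest, "wb") as f:
                    for chunk in resp.iter_content(chunk_size=chunk_size):
                        if chunk:
                            f.write(chunk)
            return dest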

    This is tested to be working on my local Windows 10 machine and a Google Colab instance.

    Fixes #47 Partial for #76

    opened by cjpeterson 0
Owner
pixray
CenterFace(size of 7.3MB) is a practical anchor-free face detection and alignment method for edge devices.

CenterFace Introduce CenterFace(size of 7.3MB) is a practical anchor-free face detection and alignment method for edge devices. Recent Update 2019.09.

StarClouds 1.2k Dec 21, 2022
PyTorch implementation of paper "IBRNet: Learning Multi-View Image-Based Rendering", CVPR 2021.

IBRNet: Learning Multi-View Image-Based Rendering PyTorch implementation of paper "IBRNet: Learning Multi-View Image-Based Rendering", CVPR 2021. IBRN

Google Interns 371 Jan 03, 2023
A clean and scalable template to kickstart your deep learning project 🚀 ⚡ 🔥

Lightning-Hydra-Template A clean and scalable template to kickstart your deep learning project 🚀 ⚡ 🔥 Click on Use this template to initialize new re

Hyunsoo Cho 1 Dec 20, 2021
Official PyTorch implementation of BlobGAN: Spatially Disentangled Scene Representations

BlobGAN: Spatially Disentangled Scene Representations Official PyTorch Implementation Paper | Project Page | Video | Interactive Demo BlobGAN.mp4 This

148 Dec 29, 2022
This is an official implementation of the High-Resolution Transformer for Dense Prediction.

High-Resolution Transformer for Dense Prediction Introduction This is the official implementation of High-Resolution Transformer (HRT). We present a H

HRNet 403 Dec 13, 2022
Reproduces ResNet-V3 with pytorch

ResNeXt.pytorch Reproduces ResNet-V3 (Aggregated Residual Transformations for Deep Neural Networks) with pytorch. Tried on pytorch 1.6 Trains on Cifar

Pau Rodriguez 481 Dec 23, 2022
This is the official implement of paper "ActionCLIP: A New Paradigm for Action Recognition"

This is an official pytorch implementation of ActionCLIP: A New Paradigm for Video Action Recognition [arXiv] Overview Content Prerequisites Data Prep

268 Jan 09, 2023
Multi-Scale Geometric Consistency Guided Multi-View Stereo

ACMM [News] The code for ACMH is released!!! [News] The code for ACMP is released!!! About ACMM is a multi-scale geometric consistency guided multi-vi

Qingshan Xu 118 Jan 04, 2023
PyTorch hand (object) detection, YOLO v5, hand detection

YOLO V5 object detection, including hand detection. Project introduction: hand detection, with example hand detections and a video example. Project configuration: the author's development environment is Python 3.7, PyTorch = 1.5.1. Dataset: the hand-detection dataset uses TV-Hand and COCO-Hand (the COCO-Hand-Big part)

Eric.Lee 11 Dec 20, 2022
This is a code repository for the paper "Graph Auto-Encoders for Financial Clustering".

Repository for the paper "Graph Auto-Encoders for Financial Clustering" Requirements Python 3.6 torch torch_geometric Instructions This is a simple c

Edward Turner 1 Dec 02, 2021
Ἀνατομή is a PyTorch library to analyze representation of neural networks

Ἀνατομή is a PyTorch library to analyze representation of neural networks

Ryuichiro Hataya 50 Dec 05, 2022
The Pytorch implementation for "Video-Text Pre-training with Learned Regions"

Region_Learner The Pytorch implementation for "Video-Text Pre-training with Learned Regions" (arxiv) We are still cleaning up the code further and pre

Rui Yan 0 Mar 20, 2022
Few-shot Relation Extraction via Bayesian Meta-learning on Relation Graphs

Few-shot Relation Extraction via Bayesian Meta-learning on Relation Graphs This is an implemetation of the paper Few-shot Relation Extraction via Baye

MilaGraph 36 Nov 22, 2022
This is the official implementation of VaxNeRF (Voxel-Accelerated NeRF).

VaxNeRF Paper | Google Colab This is the official implementation of VaxNeRF (Voxel-Accelerated NeRF). This codebase is implemented using JAX, buildin

naruya 132 Nov 21, 2022
Hyperbolic Procrustes Analysis Using Riemannian Geometry

Hyperbolic Procrustes Analysis Using Riemannian Geometry The code in this repository creates the figures presented in this article: Please notice that

Ronen Talmon's Lab 2 Jan 08, 2023
A compendium of useful, interesting, inspirational usage of pandas functions, each example will be an ipynb file

Pandas_by_examples A compendium of useful/interesting/inspirational usage of pandas functions, each example will be an ipynb file What is this reposit

Guangyuan(Frank) Li 32 Nov 20, 2022
[CVPR 2022] Official Pytorch code for OW-DETR: Open-world Detection Transformer

OW-DETR: Open-world Detection Transformer (CVPR 2022) [Paper] Akshita Gupta*, Sanath Narayan*, K J Joseph, Salman Khan, Fahad Shahbaz Khan, Mubarak Sh

Akshita Gupta 127 Dec 27, 2022
A python module for configuration of block devices

Blivet is a python module for system storage configuration. CI status Licence See COPYING Installation From Fedora repositories Blivet is available in

78 Dec 14, 2022
Training neural models with structured signals.

Neural Structured Learning in TensorFlow Neural Structured Learning (NSL) is a new learning paradigm to train neural networks by leveraging structured

955 Jan 02, 2023
QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art Machine Learning models.

QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art Machine Learning models.

152 Jan 02, 2023