Python image processing: best libraries and frameworks for AI coding experimentation. A custom free Isadora GPT for Python image processing coding
-
Hello all. I want to experiment with image processing capabilities in Pythoner, but since my Python is very limited I'd like to use ChatGPT or other LLMs to generate the code from a prompt. Through this iterative process I'd like to create a series of video filters. So, I read a bit about the best Python libraries for image processing. Maybe we can all contribute, collect them here, and refine them before adding them to the plugin page on the TroikaTronix website.
More resources on GeeksforGeeks, which has a nice computer vision tutorial for Python.
So far this is what I have understood.
1) Virtual environments (venvs) must contain the libraries or packages themselves.
Question: does this mean that I might end up with multiple virtual environments that each contain NumPy, for instance?
2) Some libraries use incompatible conventions between them (e.g., OpenCV stores color channels in BGR order while PIL uses RGB, so images must be converted when passed between the two packages).
3) Thanks to Pythoner's `# iz_input 1 "value 1"` comment directive for inputs and `# iz_output 1 "value 1"` for outputs, I can interactively control some parameters of the image processing and also offer multiple output choices.
4) ChatGPT can create and comment image processing code using all of these libraries.
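To illustrate point 2, here is a minimal NumPy-only sketch (the tiny two-pixel "image" is synthetic test data, not from any real file). Reversing the last axis converts BGR to RGB; with OpenCV installed, `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does the same thing.

```python
import numpy as np

# OpenCV stores pixels as BGR, PIL/Pillow as RGB. A NumPy slice that
# reverses the channel (last) axis converts between the two conventions.

# Tiny synthetic 1x2 "image": one pure-blue and one pure-red pixel, in BGR.
bgr = np.array([[[255, 0, 0],    # blue pixel (B=255)
                 [0, 0, 255]]],  # red pixel  (R=255)
               dtype=np.uint8)

rgb = bgr[..., ::-1]  # reverse channel order: BGR -> RGB

print(rgb[0, 0])  # [  0   0 255] -- blue is now in the last (B) slot
print(rgb[0, 1])  # [255   0   0] -- red is now in the first (R) slot
```

If you skip this conversion, a PIL image opened through OpenCV (or vice versa) will show swapped red and blue tones.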
With others we could contribute to a custom GPT I can create with all the information regarding Isadora.
Here are some of the more used and common libraries
OpenCV
Scikit-Image
SciPy
Pillow/PIL
NumPy
Mahotas
SimpleITK
Pgmagick
ImageIO
Matplotlib
It would be nice to share here or somewhere else tips for experimenting with new image processing filters, produced also with a custom GPT whose context is the Isadora manual plus pointers to the various Python image processing libraries.
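As an example of the kind of filter a GPT could generate from these libraries, here is a small NumPy-only sketch of a posterize filter. The function name and `levels` parameter are my own illustration (not from any of the libraries above), and the frame is synthetic test data standing in for a real video frame.

```python
import numpy as np

def posterize(frame: np.ndarray, levels: int = 4) -> np.ndarray:
    """Reduce each 8-bit channel to `levels` distinct values (hypothetical helper)."""
    step = 256 // levels
    # Integer-divide then multiply back: snaps every value down to a band edge.
    return ((frame // step) * step).astype(np.uint8)

# Synthetic 2x2 RGB frame instead of a real video frame.
frame = np.array([[[10, 100, 200], [255, 255, 255]],
                  [[0, 0, 0], [128, 64, 32]]], dtype=np.uint8)

print(posterize(frame, levels=4))
# With levels=4 (step=64), every channel snaps to 0, 64, 128, or 192.
```

The same frame-in/frame-out shape (a `uint8` NumPy array) is what most of the libraries in the list exchange, which is why NumPy keeps showing up as the common denominator.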
Just a thought
-
Hey there,
I am going on a deep dive over in TouchDesigner land to try to get good results from dotSimulate's implementation of realtime image diffusion, his "StreamDiffusionTD" project.
So far I've been able to get it to work on my M3 MacBook Pro at ~4-7 fps, and on a Windows computer with an Nvidia graphics card at ~18-20 fps.
I'm still very new to all of this, but I think I can confidently answer at least your first question ("Does this mean that I might end up with multiple virtual environments that each contain NumPy, for instance?"):
1) Yes, you might end up with multiple venvs for different instances of something you are working on. I have ended up with over 5 different attempted installations of StreamDiffusion on my computer while trying to get it to work. I recently added a new installation because there was an update from dotSimulate and I didn't want to accidentally break my previous installation, so I made a new one and now have both in a working state.
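To make that concrete, here's a small standard-library sketch (the environment names are hypothetical). Each venv created with the `venv` module is an isolated directory tree marked by its own `pyvenv.cfg`, with its own site-packages, so two installs can each carry their own copy, and version, of NumPy without touching each other.

```python
import os
import tempfile
import venv

# Sketch: create two independent virtual environments. Packages installed
# into one are invisible to the other, which is why multiple attempted
# installs each end up with their own copy of NumPy.
base = tempfile.mkdtemp()
names = ("env_streamdiffusion_v1", "env_streamdiffusion_v2")  # hypothetical names

for name in names:
    venv.create(os.path.join(base, name))  # with_pip=False by default, so this is fast

# Each env is a self-contained tree with its own pyvenv.cfg marker file.
for name in names:
    print(name, os.path.exists(os.path.join(base, name, "pyvenv.cfg")))
```

Running `env_streamdiffusion_v1/bin/pip install numpy` (or `Scripts\pip` on Windows) would then populate only that environment's site-packages.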
In terms of all the common libraries... I know NumPy was one of the ones I struggled with, getting the correct version installed to make this work.
Here is a video describing the install process for StreamDiffusionTD: