[ANSWERED] Waterfall (particles) simulation
-
I did a second quick test using Mark Coniglio's video of the hand with a black background, and your patch works incredibly well!
What you would have to think about/achieve in an installation situation is how to track bodies efficiently without black backgrounds and well-defined shiny objects like in Mark's video. With that we could start developing immersive situations with Isadora!!
Best, Maxi-RIL
-
Previously, I have used several methods for isolating performers. A depth camera (Kinect etc.) video feed, e.g. via OpenNI, isolates the human figure by calibrating a depth plane/distance. You can also use the 'Difference' video actor to isolate just the moving elements of a live video input. There are also thermal imaging cameras (I have an old FLIR SR6).
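For anyone curious what the 'Difference' actor is doing conceptually, here is a minimal Python sketch of frame differencing. This is not Isadora code; the frames, values, and threshold are all illustrative assumptions:

```python
# Minimal sketch of frame differencing, the idea behind Isadora's
# 'Difference' actor: pixels that change between frames are kept,
# static background pixels fall to zero. Frames are flat lists of
# 0-255 grayscale values; the threshold of 30 is an arbitrary choice.

def difference_mask(prev_frame, curr_frame, threshold=30):
    """Return a binary mask: 255 where the scene moved, 0 where it is static."""
    return [255 if abs(c - p) > threshold else 0
            for p, c in zip(prev_frame, curr_frame)]

# A static background (value 10) with one "moving" pixel jumping to 200:
prev = [10, 10, 10, 10]
curr = [10, 200, 10, 10]
print(difference_mask(prev, curr))  # [0, 255, 0, 0]
```

The nice property for installations is that the static background cancels itself out, which is why no black backdrop is needed.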
Best Wishes
Russell
-
@bonemap the Difference actor made the "difference"... it works great!
ps: How do I insert the video right here so you can see it?
-
Great to hear what a difference makes!
If you want to share a video you can try one of the following:
Upload to YouTube or Vimeo, and then paste the link here.
Take a screen grab with a gif-making tool like Giphy Capture and drop the .gif file into the forum thread (if you do this, you will need to keep the .gif file under 3 MB, or it will not upload)
Best wishes,
Russell
-
Dear @bonemap
Thanks to your guide I was able to learn a lot about 3D models and generate different systems, including this one that goes in a different direction compared to the original examples. https://www.instagram.com/reel/C4jL42CvyQY/?igsh=M3BwZzNwZnZheHk4
My specific question now is the following: do you think it is possible to develop a system of particles/3D models where they stay still and shake/move only when someone moves or passes by? That is to say, moving away from the "waterfall"-type example of continuous (ascending or descending) movement: a system that fills the entire screen but remains still until someone moves or passes by.
something like this? https://www.instagram.com/p/C5V9L2qr3Nf/?igsh=bHY5aXFqYm9zYWxw
Thanks a lot!
Best,
Maxi-RIL
-
Hi,
I hope you are well and in good spirits. Of course, you can use Isadora for a similar style of interactive display. In your example by the Japanese artist there is a dense patterning of a great many particles, so considerations around patch efficiency will likely be critical.

Alternatively, consider a layered approach: for example, record a particle scene to video and then composite this video behind your interactive particles in a new scene. In this way you increase the visual quantity of particles while the real-time interactivity is optimised to a top image layer.

You can also do calculations to determine the maximum number of particles that can be present without affecting your frame rate. Use the 'Performance' watcher module to help make the calculations based on what is going into your particle system's frequency and life span inputs. You may have noticed that the 'number of particles' input is not dynamic, and resetting it will kill all currently active instances. So it is a setting that needs to be calculated and set at the start.
Regarding your primary question, numerous exciting and dynamic ways exist to determine and control the spatial placement of your particles in the 3D viewport of your scene. This flexibility allows for creative experimentation and can enhance the interactivity of your display. Here are some examples I have shared with the Isadora User Group on Facebook:
https://m.facebook.com/video.p...
here is a demonstration patch for that: demo-particles-04.zip
https://m.facebook.com/video.php/?video_id=2138613486192707
These two examples use an external source for the x, y, and z positioning data: 1/ a grid and 2/ a sphere. These data sets were generated using MeshLab software (open source and free). Alternatively, the distribution of particles can be randomly generated by wave generator modules set to random. You will want to spread the particles within confined distances along your x- and y-axes. Both particle systems are made dynamic using the gravity field settings (that you already know about).
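If you don't have MeshLab handy, point sets like these can also be generated directly. Here is a hypothetical Python sketch (the function names and counts are my own, not a MeshLab or Isadora API) that produces grid and sphere x/y/z data:

```python
# Sketch of generating x, y, z placement data like the MeshLab-exported
# sets described above: a flat grid and a sphere of points. All names
# and point counts are illustrative, not Isadora-specific.
import math

def grid_points(cols, rows, spacing=0.1):
    """Evenly spaced points on the x/y plane, z = 0."""
    return [(c * spacing, r * spacing, 0.0)
            for r in range(rows) for c in range(cols)]

def sphere_points(n, radius=1.0):
    """Roughly even points on a sphere using a Fibonacci spiral."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # height from -1 to 1
        r = math.sqrt(max(0.0, 1.0 - y * y))   # ring radius at this height
        theta = golden * i                     # spiral angle
        pts.append((radius * r * math.cos(theta),
                    radius * y,
                    radius * r * math.sin(theta)))
    return pts

print(len(grid_points(4, 3)))   # 12 grid points
print(len(sphere_points(100)))  # 100 points on the unit sphere
```

The resulting x/y/z triples could then be fed to the particle positions the same way the MeshLab data sets are used in the demo patches.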
Once you've set up the spatial distribution of your particles, the next step is to make the gravity field parameters inside your Isadora patch respond to the tracking system. This is a key aspect of the interactivity of your system. The options for this include a camera-based vision system like Isadora’s blob tracking eyes++ or potentially OpenNI depth imaging. In the example video by the Japanese artist, you can see the camera pressed against the bottom of the shopfront glass and a short stem of wires leading to the bottom edge, indicating the use of a tracking system.
After all, it is a comparatively simple interactive system with just the passing motion of human movement to consider. How would it respond to someone dancing into it? And you have to consider who it is for; the passerby appears uninterested, but the camera documenting as a 'witness' is the audience in this case.
Best wishes
Russell
-
@bonemap thanks for that fast response !
What do you mean by "use the 'Performance' watcher module"? And how do I achieve this: "You can also do calculations to determine the maximum number of particles that can be present without affecting your frame rate."?
thanks again !
Best,
Maxi-RIL
-
Hello,
For best results when working with particles, you can adjust the 'frame rate' and 'service tasks' properties in the Isadora settings (menu: Isadora/Settings).
Set preferences that will be best for the capacity of your computer. For example: Target Frame Rate - 30 FPS and General Service Tasks - 5x Per Frame.
Once you have made your settings, enter the same values into the 'Calculate optimised Pulse triggers and Particle Count' user actor that is included in the demonstration patch available here.
This will calculate the available frequency range for your particle parameters based on Isadora Preference settings.
So...
1/ The parameter for Frames Per Second is first set in the Isadora Preferences.
2/ The parameter for General Service Tasks is first set in the Isadora Preferences.
Then calculate the efficient number of particles based on the pulse trigger frequency and the total life span of the particle:
3/ Set the Fade-in time
4/ Set the Hold time
5/ Set the Fade-out time
RESET the user actor.
Keep an eye on the LOAD rating, and if it goes too high, reduce the settings in the Isadora Preferences and re-enter the new values accordingly.
You will still achieve an acceptable particle effect at Target Frame Rate 24 FPS and General Service Tasks 12x Per Frame. If your computer is struggling, try Target Frame Rate 15 FPS and General Service Tasks 15x Per Frame… be kind to your computer!
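The arithmetic behind these steps can be sketched in code. To be clear, the exact formula inside the 'Calculate optimised Pulse triggers and Particle Count' user actor isn't shown in this thread, so the Python below is only an assumption-laden sketch of the steady-state relationship it rests on: particles alive at once ≈ pulse trigger frequency × total life span (fade-in + hold + fade-out). All numbers are illustrative.

```python
# Rough sketch of the particle-budget arithmetic described in the steps
# above. The real 'Calculate optimised Pulse triggers and Particle Count'
# user actor may differ; this shows only the steady-state relationship:
#   particles alive ~= pulse frequency (triggers/sec) * total life span (sec).

def total_life_span(fade_in, hold, fade_out):
    """Steps 3-5: the full lifetime of one particle, in seconds."""
    return fade_in + hold + fade_out

def steady_state_count(pulse_hz, life_span):
    """Particles alive at once when triggering pulse_hz times per second."""
    return pulse_hz * life_span

def max_pulse_hz(particle_budget, life_span):
    """Fastest trigger rate that keeps the count inside a chosen budget."""
    return particle_budget / life_span

life = total_life_span(fade_in=0.5, hold=2.0, fade_out=0.5)  # 3.0 seconds
print(steady_state_count(pulse_hz=10, life_span=life))       # 30 particles alive
print(max_pulse_hz(particle_budget=300, life_span=life))     # 100 triggers/sec
```

In practice the Target Frame Rate is a natural ceiling for the pulse rate: triggering faster than, say, 30 FPS just wastes cycles without adding visible particles.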
-
Dear Russell, I hope you're doing well. I'm working again with the patch you shared about 3D model particles, and it works really well!
My question is the following: starting from this "cascade" of particles that respond to movement, how could I introduce/add new 3D models (object input) or textures (Texture Map input) gradually? Because when modifying the texture or object, all the particles change instantly (it's reasonable), but I want to be able to mix or merge the old ones with the new ones in a more organic way, respecting the "lifecycle" of each one. I imagine a possible solution using two 3D Model Actors simultaneously. But perhaps there is another, more efficient and organic solution.
I hope my question is understood. Thank you very much.
Best,
Maxi, Isadora Latin Network
-
Hi Maxi,
here is a demonstration of how to achieve your suggested composition.
The method is to combine a number of 3D models into a single .3ds file and then separate the models using the 'group index' parameter of the '3D Model Particles' module.
Best Wishes
Russell
-
@bonemap thanks a lot for your super fast response!
I see your approach, and it's super cool.
But what if I want individual control over those models? For example, start with a sphere and at a certain point move to a cube, and so on... maintaining the idea of a gradual, but independent, transformation... or I can think of a sequence of letters (from A to Z).
I hope I'm clear.
Best,
Maxi
-
Hi Maxi,
You might be close to the limitations of the '3D Model Particles' actor with this concept of 'organic transformation'. There is a way to simulate transformation of the 3D model itself, and I have prepared the attached demonstration for you.
-
I don't know how to apply it to this case, but the texture remains attached to each particle when new particles are created (I haven't tested 3D models, sorry). So you can control the ratio of each texture used. In the case of the alphabet, you could start with all A's and then slowly replace A with B by switching the col/row of the texture for each created particle at a defined ratio that changes over time.
-
@dusx Thanks Dusx!
What's not clear to me in your approach is whether the letters should be used as textures or as 3D models themselves?
Best,
Maxi
-
@ril said:
letters should be used as textures or as 3D models themselves
This works in the 3D Particles actor with textures... so I think it will be the same in the 3D Model Particles actor. Since the texture image doesn't change (only the section used), the image is retained per particle. If you combine setting the texture row/col (so your texture is made up of a grid of image cells) with a series of these inputs over time, you can create a series of different animations: one animation for every col/row texture cell.
But again, I haven't recently tested with the 3D model particles actor.
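The spawn-time cell selection described above can be sketched as follows. This is a conceptual Python sketch, not Isadora's API; the atlas cell coordinates and the ramp duration are illustrative assumptions:

```python
# Sketch of the texture-atlas idea: each new particle is assigned a
# texture cell (col, row), and the probability of picking letter B
# instead of letter A rises over time, so A's are gradually replaced.
# Cell coordinates and the ramp duration are illustrative assumptions.
import random

CELL_A = (0, 0)  # atlas cell holding the letter 'A'
CELL_B = (1, 0)  # atlas cell holding the letter 'B'

def cell_for_new_particle(t, ramp_seconds=10.0, rng=random):
    """Pick a texture cell for a particle spawned at time t (seconds).

    At t = 0 every particle gets A; by t = ramp_seconds every particle
    gets B. The cell is fixed at spawn time, matching the observation
    that each particle keeps its texture for its whole life.
    """
    ratio_b = min(1.0, max(0.0, t / ramp_seconds))
    return CELL_B if rng.random() < ratio_b else CELL_A

# Before the ramp starts, everything is A; after it ends, everything is B:
print(cell_for_new_particle(0.0))   # (0, 0) -> always 'A'
print(cell_for_new_particle(15.0))  # (1, 0) -> always 'B'
```

Between those extremes, the mix of letters on screen shifts organically because each particle's cell is locked in when it is born and only expires with its life span.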
-