    [LOGGED] Keying Head & Shoulders like in Zoom & Skype

    Feature Requests
    Tags: background, virtual, virtual theatre, zoom, skype
    • Skulpture (Izzy Guru), in reply to @liannemua, posted Oct 19, 2020, 9:03 AM

      @liannemua said:

      Hi, curious if this exists in Isadora, or if not, it would be great for live virtual theater. The ability to recognize & key out the background of talking head feeds imported live from Skype - like the virtual backgrounds in Zoom & Skype, but to replace the background with alpha. Thanks.

       Give this a try: https://www.chromacam.me/ 

      Graham Thorne | www.grahamthorne.co.uk
      RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
      RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
      RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

      • Woland (Tech Staff), posted Oct 19, 2020, 9:15 AM (last edited 9:22 AM)

        While we're on the topic, does anyone know of any open-source, cross-platform tools for this? Having open-source code as a starting point would both decrease the difficulty and increase the likelihood of incorporating this as a native Isadora feature.


        Also, I've logged this as a feature request.

        TroikaTronix Technical Support
        New Support Ticket: https://support.troikatronix.com/support/tickets/new
        Support Policy: https://support.troikatronix.com/support/solutions/articles/13000064762
        Add-Ons: https://troikatronix.com/add-ons/ & https://troikatronix.com/add-ons/?u=woland
        Professional Services: https://support.troikatronix.com/support/solutions/articles/13000109444

        | Isadora Version: all of them | Mac Pro (Late 2013), macOS 10.14.6, 3.5GHz 6-core, 1TB SSD, 64GB RAM, Dual AMD FirePro D700s |

        • Skulpture (Izzy Guru), posted Oct 19, 2020, 9:16 AM

          Also: https://www.xsplit.com/vcam


          • tomthebom, in reply to @Woland, posted Oct 20, 2020, 9:14 AM

            @Woland said:

            Also, I've logged this as a feature request.

            I couldn't agree more: I find it a "must-have" in Corona times. I am just checking out Skulpture's tip: https://www.chromacam.me/. Thirty bucks for the full version and a very slow download are not very promising ;o(

            Izzy 3.2.6 ARM on MBP14'/2023/M2 Pro/ macOS 13.5

            • Aolis, posted Nov 12, 2020, 4:35 PM

              Just tried out ChromaCam and got it working in just a few moments. It may be worth the cost if needed.

              late 2012 MacPro - Mojave 

              Media Artist & Teacher
              MacBook Pro, Apple M3 Max, 128 GB
              Sonoma 14.3.1

              • Kathmandale, in reply to @liannemua, posted Nov 13, 2020, 9:28 AM

                @liannemua If you get your remote performers to set their virtual backgrounds to a pure green image, the keying in Isadora works perfectly. Even if their local laptops aren't up to it and they have to tick the 'I have a greenscreen' option in Zoom, you can get really good results that way. It's how we made Airlock, and we're using it as a technique on other projects.

                To be honest, I actually prefer the results you get with the 'I have a greenscreen' option over the 'head and shoulders recognition' option. Zoom seems to do a pretty good job of keying out imperfect (or imperfectly lit) greenscreens (or green sheets, or blue walls, or whatever your performers can get in front of). If you then set their virtual background to an all-green image, it's really easy to get the settings just right in Isadora. It also means you don't get that thing of hands, arms, hats, props, or whole performers occasionally disappearing.
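
                To see why the all-green virtual background keys so cleanly, here is a minimal sketch of the same idea outside Isadora, in Python with OpenCV; the HSV threshold values are illustrative assumptions you would tune per feed:

                ```python
                # Chroma-key sketch: turn a uniform green background into an alpha mask.
                # Assumes the remote feed arrives as a local webcam/virtual-camera device.
                import cv2

                cap = cv2.VideoCapture(0)  # the captured Zoom/Skype feed

                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break

                    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                    # A virtual background is far more uniform than a real greenscreen,
                    # so a narrow band around pure green (hue ~60 of 0-179) is enough.
                    green = cv2.inRange(hsv, (50, 100, 100), (70, 255, 255))
                    alpha = cv2.bitwise_not(green)  # performer opaque, background clear

                    rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
                    rgba[:, :, 3] = alpha
                    cv2.imshow('keyed', rgba)
                    if cv2.waitKey(1) == 27:  # Esc to quit
                        break

                cap.release()
                cv2.destroyAllWindows()
                ```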

                2014 MBP Mojave 10.14.6 OS with 16GB, 2.5Ghz i7 quad core, Intel Iris Pro 1536 & Geforce GT 750m 2GB - Izzy 3.0.8
                Gigabyte Brix Windows 10 with 32GB, i7-6700 quad core, 4GB GeForce GTX 950 - Izzy 3.0.8
                Based in Manchester, UK.

                • mark, in reply to @Woland, posted Nov 14, 2020, 8:01 AM (last edited 8:03 AM)

                  @woland said:

                  While we're on the topic, does anyone know of any open-source, cross-platform tools for this?

                   I did some poking around. The algorithms that remove an arbitrary background all rely on artificial-intelligence systems trained on datasets of people in front of webcams. You can get a sense of the complexity by looking at this Background Matting GitHub project, or at this article where the author implements background removal using Python and TensorFlow (AI) tools.
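
                   To give a sense of the shape of such a tool, here is a rough Python sketch of just the inference step, assuming a pretrained TFLite segmentation model (the model file name, input normalization, and class layout are assumptions; the training work described above is the hard part and is not shown):

                   ```python
                   # Run a pretrained person-segmentation model on one webcam frame.
                   import cv2
                   import numpy as np
                   import tensorflow as tf

                   # Hypothetical model file; any TFLite semantic-segmentation model
                   # with a float input and per-pixel class scores fits this shape.
                   interpreter = tf.lite.Interpreter(model_path='segmentation_model.tflite')
                   interpreter.allocate_tensors()
                   inp = interpreter.get_input_details()[0]
                   out = interpreter.get_output_details()[0]
                   _, h, w, _ = inp['shape']

                   cap = cv2.VideoCapture(0)
                   ok, frame = cap.read()
                   assert ok, 'no webcam frame'

                   # Normalization to [-1, 1] is an assumption; check your model's docs.
                   small = cv2.resize(frame, (w, h)).astype(np.float32) / 127.5 - 1.0
                   interpreter.set_tensor(inp['index'], small[np.newaxis, ...])
                   interpreter.invoke()
                   logits = interpreter.get_tensor(out['index'])[0]  # (h, w, classes)

                   # Assume class 0 is background: everything else becomes opaque.
                   mask = (np.argmax(logits, axis=-1) > 0).astype(np.uint8) * 255
                   ```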

                   So, what I'm trying to say here is that this is a major project that would require my entire attention. If the program mentioned above works for $30, I'd say that's a reasonable cost given how much work it would be to implement such a feature. I wish we had unlimited programming resources to take this on, but it's not realistic for us at the moment.

                  Best Wishes,
                  Mark

                  Media Artist & Creator of Isadora
                  Macintosh SE-30, 32 Mb RAM, MacOS 7.6, Dual Floppy Drives

                  • mark, posted Nov 14, 2020, 8:19 AM (last edited 8:20 AM)

                    P.S. One further note:

                     This article from our friends at TouchDesigner describes how you can use the Photo app in iOS to remove the background and send the image to a desktop computer. (The example is for Windows, but there is a "For Mac Users" section. They mention using CamTwist, but you should use our free Syphon Virtual Webcam to get Isadora's Syphon output into Zoom.)

                     However, this example uses NDI to capture the iPhone screen, so there is going to be a substantial delay.

                    Best Wishes,
                    Mark


                    • liminal_andy, in reply to @mark, posted Nov 15, 2020, 8:07 PM (last edited 8:28 PM)

                      @mark said:

                       I did some poking around. The algorithms that remove an arbitrary background all rely on artificial-intelligence systems trained on datasets of people in front of webcams. You can get a sense of the complexity by looking at this Background Matting GitHub project (https://github.com/senguptaumd...) or this article (https://elder.dev/posts/open-s...) where the author implements background removal using Python and TensorFlow (AI) tools.

                       In furtherance of this subject, I did come across an interesting project using TensorFlow's BodyPix, a popular framework for this type of work, though not super helpful for us Izzy users as-is. If I get some time, I am going to bolt Spout/Syphon onto it and try to make it cross-platform, and maybe speed it up some if I can. I'm imagining you'd output a stage to a shared memory buffer, select the stage in a console, and it would post an alpha mask to another shared buffer.
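
                       Something like this Python loop is the round trip I have in mind; the receive/publish functions are hypothetical placeholders for Spout (Windows) or Syphon (macOS) bindings, and the model call is stubbed out:

                       ```python
                       # Imagined stage-to-mask round trip. Nothing here is a real API:
                       # all three functions are hypothetical stand-ins.
                       import numpy as np

                       def receive_stage_frame(sender: str) -> np.ndarray:
                           # Hypothetical: latest RGBA frame from an Isadora stage
                           # shared over Spout/Syphon.
                           raise NotImplementedError

                       def segment_person(frame: np.ndarray) -> np.ndarray:
                           # Stub for the BodyPix-style model: per-pixel 0-255 alpha.
                           raise NotImplementedError

                       def publish_mask(sender: str, mask: np.ndarray) -> None:
                           # Hypothetical: post the mask to a second shared texture
                           # for Isadora's Alpha Mask actor to pick up.
                           raise NotImplementedError

                       while True:
                           frame = receive_stage_frame('Isadora Stage 1')
                           publish_mask('Isadora Alpha Mask', segment_person(frame))
                       ```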

                       Would love to discuss this further, as I do see it being helpful for online work. I worked a fair amount on soft-body projection mapping / masking in pre-COVID times, using pixel energy functions as heuristics to accelerate these detection algorithms. I find that for virtual shows I am pulling out / modding more GLSL shaders than I normally do and thinking critically about compositing, and background segmentation is often a key part of that :)

                      Somehow, it all comes together.

                      Andy Carluccio
                      Zoom Video Communications, Inc.
                      www.liminalet.com

                      [R9 3900X, RTX 2080, 64GB DDR4 3600, Win 10, Izzy 3.0.8]
                      [...also a bunch of hackintoshes...]

                      • liminal_andy, posted Nov 17, 2020, 6:42 AM

                        So the following is purely for fun, in response to @mark's post imagining how this would be done. I did follow up on it over the weekend and got something "working".

                        I heavily modified the project I mentioned earlier by manually rolling it over to the TensorFlow Lite c_api (a real pain!), then porting it to Windows and feeding it the deeplabv3_257_mv_gpu.tflite model. To make it useful to Isadora, I dusted off and updated an OpenCV-to-Spout pipeline in C++ that I had used a few years ago for some of my live projection-masking programs, so now my prototype can receive an Isadora stage, run it through the model, and output the resulting mask back to Spout for Isadora to use with the Alpha Mask actor.
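
                        For anyone following along, the post-processing on that model's output is simple once the inference runs: deeplabv3_257_mv_gpu.tflite returns per-pixel scores for the 21 PASCAL VOC classes over a 257x257 grid, and 'person' is class index 15. A Python equivalent of the mask step (my actual code is C++ against the c_api) would look like:

                        ```python
                        # Extract a person alpha mask from DeepLabV3 output.
                        # logits: (257, 257, 21) per-pixel class scores;
                        # PASCAL VOC class 15 is 'person'.
                        import numpy as np

                        def person_alpha_mask(logits: np.ndarray) -> np.ndarray:
                            person = np.argmax(logits, axis=-1) == 15  # boolean person map
                            return person.astype(np.uint8) * 255       # 8-bit alpha mask
                        ```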

                        My results (demo video embedded in the original post):

                        Now obviously, this is insane to actually attempt for production purposes in its current form. I'm getting about 5 fps (granted, with no GPU acceleration, and I'm running in debug mode). I could slightly improve things by bouncing the original Isadora stage back out on its own Spout server, but this is just a proof of concept. In this state, it should be relatively easy to port to Mac/Syphon and add GPU acceleration on compatible systems for higher FPS and/or multiple instances for many performers.


                        Again, just a fun weekend project, but I found it very educational.

