    Projecting onto a mannequin head

    How To... ?
    • Michel (Izzy Guru)

      @Skulpture

      Last year we had this installation at the train station where I live; it was set up in a waiting room: http://glaserkunz.net/

      It seems they only show it on the site if you go to the "selected works" section.

      Best Michel

      Michel Weber | www.filmprojekt.ch | rMBP (2019) i9, 16gig, AMD 5500M 8 GB, OS X 10.15 | located in Winterthur Switzerland.

      • Skulpture (Izzy Guru)

        Thanks all.

        Graham Thorne | www.grahamthorne.co.uk
        RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
        RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
        RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

        • Fred

          There are a bunch of live facial-mapping tools around, and most operate on a similar principle. They start with a standardised mesh that describes a facial structure; a corresponding mesh is extracted from the source face and the destination face through feature tracking. They then interpolate between the source and destination meshes, and the live video is bound to the mesh. This gives a pretty accurate result, and it then does not matter what position the source and destination faces are in - you can switch between them because they are locked together (as long as the features are being tracked). The last part to make it all nice is to feather the edges of the mesh so it blends smoothly onto the destination. As there is no face tracker in Isadora it is a little difficult to implement without making a custom plugin, and at the moment the SDK is well behind the current release. Here is an example made with OF that you could convert to inputting and outputting Syphon and controlling through OSC. Here is an example of the output that uses the above code - in fact there is an example there that does everything in the video.
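
          Since Fred mentions an openFrameworks (OF) example, here is a minimal sketch of the mesh-interpolation step described above, written in OF style. It is an illustration rather than the code from the linked example; it assumes the face tracker has already produced matching source and destination meshes, and it leaves out the feathered edge blend:

          ```cpp
          // Sketch only. Assumes a face tracker (e.g. ofxFaceTracker) has already produced
          // a source and a destination ofMesh with the same vertex count, and that the
          // source mesh's texture coordinates point into the live video frame.
          #include "ofMain.h"

          // Interpolate vertex positions between the source and destination face meshes.
          // Texture coordinates stay with the source, so the live video stays bound to the mesh.
          ofMesh blendFaceMeshes(const ofMesh& src, const ofMesh& dst, float amount) {
              ofMesh out = src; // keeps the source's indices and texture coordinates
              for (std::size_t i = 0; i < out.getNumVertices(); ++i) {
                  out.setVertex(i, glm::mix(src.getVertex(i), dst.getVertex(i), amount));
              }
              return out;
          }

          // Bind the live video to the blended mesh when drawing.
          void drawMappedFace(ofTexture& liveVideo, const ofMesh& blended) {
              liveVideo.bind();
              blended.draw();
              liveVideo.unbind();
          }
          ```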

          Having said that, if you have a very still recording of the source head (which looks quite unnatural - it seems this is the odd thing about the way it was done in the link Michel posted), and the destination does not move, then it is a pretty simple exercise of just tweaking the mask and mapping of the image.

          http://www.fredrodrigues.net/
          https://github.com/fred-dev
          OSX 13.6.4 (22G513) MBP 2019 16" 2.3 GHz 8-Core i9, Radeon Pro 5500M 8 GB, 32g RAM
          Windows 10 7700K, GTX 1080ti, 32g RAM, 2tb raided SSD

          • Marci

            Hmmm.... Faceshift would be the perfect commercial tool for the job.

            rMBP 11,3 (mOS 10.13) / rMBP 11,4 (mOS 10.14) / 3x Kinect + Leap / TH2Go
            Warning: autistic - may come across rather blunt and lacking in humour!

            • Skulpture (Izzy Guru)

              I masked out a basic face and projected it onto the head - worked pretty well with hardly any effort.

              Graham Thorne | www.grahamthorne.co.uk
              RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
              RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
              RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

              • DusX (Tech Staff)

                I have wanted to try this since I have a number of mannequins here, and I saw the Gaultier exhibit, which was very inspiring. Haven't had a chance yet. Glad to hear it's working well for you without too much trouble.

                Troikatronix Technical Support

                • New Support Ticket Link: https://support.troikatronix.com/support/tickets/new
                • My Add-ons: https://troikatronix.com/add-ons/?u=dusx
                • Professional Services: https://support.troikatronix.com/support/solutions/articles/13000109444-professional-services

                Running: Win 11 64bit, i7, M.2 PCIe SSD's, 32gb DDR4, nVidia GTX 4070 | located in Ontario Canada.

                • Michel (Izzy Guru)

                  @Marci

                  Faceshift Studio is NOT available anymore; rumour has it that Apple has bought the company. They extended our license one more time and it will end in April 2016.

                  Michel Weber | www.filmprojekt.ch | rMBP (2019) i9, 16gig, AMD 5500M 8 GB, OS X 10.15 | located in Winterthur Switzerland.

                  • DusX (Tech Staff)

                    @Fred

                    How would the OF app work when projecting onto a smooth (egg) mannequin head/face? I suppose it would require some form of registration... perhaps it could be input manually as a single XYZ point.

                    Troikatronix Technical Support

                    • New Support Ticket Link: https://support.troikatronix.com/support/tickets/new
                    • My Add-ons: https://troikatronix.com/add-ons/?u=dusx
                    • Professional Services: https://support.troikatronix.com/support/solutions/articles/13000109444-professional-services

                    Running: Win 11 64bit, i7, M.2 PCIe SSD's, 32gb DDR4, nVidia GTX 4070 | located in Ontario Canada.

                    • Skulpture (Izzy Guru)

                      All I have done is grab a picture, mask out the face and warp it onto the mannequin head. Nothing special at all. It all lined up pretty well - but I needed a picture that was perfectly front-on.

                      I just grabbed a picture of my wife off her Facebook page - she wasn't impressed! Haha.
                      Something *really* obvious but simple that I hadn't thought about: the mannequin's lips are closed. So when I replace the picture with a video of someone talking, the physical 3D head won't move its lips, which could look strange. I may need a mannequin with no lips - just a flat surface.
                      Which is exactly what they have done here @Michel http://glaserkunz.net/ - so thanks for that link.
                      I also published the X and Y of two warp points on her lips and linked them to a Sound Level Watcher, haha - it made her mouth move. It wasn't perfect, but it looked funny.
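
                      That last trick is all Isadora actors (a Sound Level Watcher driving two published warp-point values), but for anyone who wants to script the same idea, here is a rough sketch of the mapping with a little smoothing so the lips don't jitter. The function name and the 0..1 level range are assumptions for illustration:

                      ```cpp
                      // Sketch only: approximates the "sound level -> lip warp point" idea outside Isadora.
                      #include "ofMain.h"

                      float smoothedLevel = 0.0f; // keep a little history so the mouth motion is not jittery

                      // rawLevel is an audio level in 0..1; maxOpenPx is how far the lip warp points may travel.
                      glm::vec2 lipOffsetFromLevel(float rawLevel, float maxOpenPx) {
                          smoothedLevel = ofLerp(smoothedLevel, ofClamp(rawLevel, 0.0f, 1.0f), 0.2f);
                          return glm::vec2(0.0f, smoothedLevel * maxOpenPx); // push the lower-lip points down
                      }
                      ```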

                      Graham Thorne | www.grahamthorne.co.uk
                      RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
                      RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
                      RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

                      • Fred

                        @DusX I would start with a still of a head and map it to the mannequin. I would then take a video of a talking head and map it onto the still image that was used for the mapping. That way the face from the live or recorded video (where the head can move and the tracking can follow it) is extracted and then matched to the features of the head that was used as the basis of the mannequin mapping. Pretty much any video of a head (live or recorded) could then be used, and it would map onto the mannequin correctly every time.
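
                        A sketch of how that retargeting could look with ofxFaceTracker-style data (an illustration of the idea, not Fred's code): track the reference still once, track the live video every frame, then draw a mesh whose vertices come from the reference face but whose texture coordinates come from the live face's landmarks, so the live face always lands where the still was mapped:

                        ```cpp
                        // Sketch only. Assumes both meshes come from the same tracker, so the landmark order
                        // and vertex count match, and that texture coordinates are in pixels (the
                        // openFrameworks default for ofTexture).
                        #include "ofMain.h"

                        ofMesh retargetFace(const ofMesh& referenceMesh, const ofMesh& liveMesh) {
                            ofMesh out = referenceMesh;               // vertex positions = the mapping onto the mannequin
                            out.clearTexCoords();
                            for (std::size_t i = 0; i < liveMesh.getNumVertices(); ++i) {
                                glm::vec3 p = liveMesh.getVertex(i);  // where this landmark sits in the live frame
                                out.addTexCoord(glm::vec2(p.x, p.y)); // sample the live video there
                            }
                            return out;
                        }

                        // Usage: liveTexture.bind(); retargetFace(refMesh, liveMesh).draw(); liveTexture.unbind();
                        ```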

                        http://www.fredrodrigues.net/
                        https://github.com/fred-dev
                        OSX 13.6.4 (22G513) MBP 2019 16" 2.3 GHz 8-Core i9, Radeon Pro 5500M 8 GB, 32g RAM
                        Windows 10 7700K, GTX 1080ti, 32g RAM, 2tb raided SSD

                        • DusX (Tech Staff)

                          @Fred

                          Good call, that makes perfect sense.

                          Troikatronix Technical Support

                          • New Support Ticket Link: https://support.troikatronix.com/support/tickets/new
                          • My Add-ons: https://troikatronix.com/add-ons/?u=dusx
                          • Professional Services: https://support.troikatronix.com/support/solutions/articles/13000109444-professional-services

                          Running: Win 11 64bit, i7, M.2 PCIe SSD's, 32gb DDR4, nVidia GTX 4070 | located in Ontario Canada.

                          • Skulpture (Izzy Guru)

                            That's what my plan was/is. :) Still image first, and then think about video.

                            Graham Thorne | www.grahamthorne.co.uk
                            RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
                            RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
                            RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

                            • Skulpture (Izzy Guru)

                              Hi @Fred.

                              I've been looking at the link above. I downloaded all the files but then got lost in Xcode.
                              I just get tons of errors, and I literally have no idea how to use Xcode.
                              Where can I learn to solve these errors? I can't even think what to Google... apart from paying to go on a course, maybe?

                              [Attached screenshot: screen-shot-2015-10-27-at-11.51.30.png]

                              Graham Thorne | www.grahamthorne.co.uk
                              RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
                              RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
                              RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

                              • Fred

                                Ok, there are some good instructions for running OF, but briefly: download openFrameworks from here https://github.com/openframeworks/openFrameworks, and download the ofxCv addon from here https://github.com/kylemcdonald/ofxCv/tree/develop (note it is the develop branch, so it matches the develop branch of openFrameworks). The addon needs to go in the addons folder, and make sure the folder has a clean name, without -master or -develop in it - you will get an error if you don't follow this instruction.

                                After downloading or cloning ofxFaceTracker, you need to make a copy of the `/libs/Tracker/model/` directory in `bin/data/model/` of each example. You can do this by hand, or `python update-projects.py` will take care of it for you.
                                The OF setup guide here http://openframeworks.cc/setup/xcode/ gives some good starters, and there are some basic tutorials that are really good here http://openframeworks.cc/tutorials/
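
                                As a rough idea of what the ofxFaceTracker examples boil down to once they build, here is a minimal app in the style of the addon's README - a sketch assuming the addon and the model folder are set up as described above, not the exact example code:

                                ```cpp
                                // Sketch only: minimal face-tracking app in the style of the ofxFaceTracker examples.
                                #include "ofMain.h"
                                #include "ofxCv.h"
                                #include "ofxFaceTracker.h"

                                class ofApp : public ofBaseApp {
                                public:
                                    ofVideoGrabber cam;
                                    ofxFaceTracker tracker;

                                    void setup() {
                                        cam.setup(640, 480);  // open the default camera
                                        tracker.setup();      // loads the model files from bin/data/model/
                                    }
                                    void update() {
                                        cam.update();
                                        if (cam.isFrameNew()) {
                                            tracker.update(ofxCv::toCv(cam)); // run the tracker on the new frame
                                        }
                                    }
                                    void draw() {
                                        cam.draw(0, 0);
                                        if (tracker.getFound()) {
                                            tracker.draw();   // draw the tracked face mesh over the video
                                        }
                                    }
                                };

                                int main() {
                                    ofSetupOpenGL(640, 480, OF_WINDOW);
                                    ofRunApp(new ofApp());
                                }
                                ```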
                                Fred

                                http://www.fredrodrigues.net/
                                https://github.com/fred-dev
                                OSX 13.6.4 (22G513) MBP 2019 16" 2.3 GHz 8-Core i9, Radeon Pro 5500M 8 GB, 32g RAM
                                Windows 10 7700K, GTX 1080ti, 32g RAM, 2tb raided SSD

                                • Skulpture (Izzy Guru)

                                  @Fred I could kiss you.

                                  Thank you for taking the time to explain that. I appreciate it. 

                                  Graham Thorne | www.grahamthorne.co.uk
                                  RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
                                  RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
                                  RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12
