Oculus and Isadora
-
Okay, so I am trying to integrate the Oculus into Isadora for a type of AR experience I am working on. The headset is immediately detected and gets a picture, but it behaves just like a regular monitor, so you only see part of the image in the headset if you close one eye or the other. Is there any way to get the output from Isadora split the necessary way so it shows in the Oculus correctly?
-
How does the Oculus require the image?
Is it split as a right and a left image? Do you send it one or two image streams? -
I am trying to learn more about that. I just got it and am reaching out to the Oculus community as well for more info. From what I have seen, programs will either duplicate the same image so there is one copy for each eye (which is something I'm going to try to emulate by moving the projector parameters around), or, the better of the two, render a similar image side by side but with the shading different in each half depending on the eye it is for. Again, I don't have the best knowledge of this yet. Thanks again for the help!
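The side-by-side layout described above can be sketched in Python. This is a minimal illustration using numpy; the 960×1080 per-eye split is an assumption based on a 1920×1080 panel, and `side_by_side` is a hypothetical helper, not an Isadora or Oculus API:

```python
import numpy as np

# Assumed frame geometry: one 1920x1080 panel split into two 960x1080 eye views.
EYE_W, EYE_H = 960, 1080

def side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack two per-eye images into one side-by-side stereo frame."""
    assert left.shape == right.shape == (EYE_H, EYE_W, 3)
    return np.hstack([left, right])

# The simpler case described above: duplicate one mono image for both eyes.
mono = np.zeros((EYE_H, EYE_W, 3), dtype=np.uint8)
frame = side_by_side(mono, mono)
print(frame.shape)  # (1080, 1920, 3)
```

For true stereo you would pass two differently rendered views instead of the same `mono` image twice.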
-
Just a tip: in any stereoscopic display, frame synchronization is fundamental. Do not generate two different videos; if you generate one video containing both eyes, you can be sure about sync. In particular (not certain, but it has happened to me), Isadora tends to drop a frame somewhere when the workload is high; if you have just one video, when that happens it happens to both eyes.
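The sync argument above can be illustrated with a toy simulation. The `play` helper and the dropped-frame indices are made up for illustration; the point is only that two independent streams can drop different frames and drift apart, while one packed stream cannot:

```python
# Toy model: a "stream" is a list of frame numbers; dropping a frame under load
# removes it from what is actually shown.

def play(frames, dropped):
    """Frames actually shown after the given indices are dropped."""
    return [f for i, f in enumerate(frames) if i not in dropped]

frames = list(range(6))
left  = play(frames, dropped={2})   # left-eye stream happens to lose frame 2
right = play(frames, dropped={4})   # right-eye stream happens to lose frame 4
print(left)   # [0, 1, 3, 4, 5]
print(right)  # [0, 1, 2, 3, 5]
# At playback position 2 the eyes show frames 3 and 2 -- a visible mismatch.

packed = play(frames, dropped={3})  # one packed stream: a drop hits both eyes
# Each packed frame carries both eyes, so the eyes can never disagree.
```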
-
All I can find out is:
"the Rift runs at 2160×1200 at 90Hz split over dual displays, consuming 233 million pixels per second." Source: https://www.oculus.com/en-us/blog/powering-the-rift/ -
You may be able to do this (get an image that more or less fits the screen), but you will not get the head rotation that is the whole point of VR... playing a flat video is possible with normal video glasses at a fraction of the price. There are also plenty of stereoscopic ones.
The DK2 is a 1920×1080 display and has refresh options of 60 or 75 Hz. Note that Oculus's recent software update leans toward not letting the screen be seen as a display by the OS, instead controlling it directly from the video card hardware. This mode is implemented on Windows systems; OSX development has been frozen for some time but will follow suit. This gives much better images and response times. You will have a lot of trouble filming with two cameras and getting good 3D in the Rift. You may have better luck using two points of view of a 3D scene, as much of the Rift work does; if you get the offset between the perspectives correct, you will get good 3D. To use the Oculus for your own purposes at the moment you will need to write code or use vvvv -
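The "offset between the perspectives" mentioned above is essentially the interpupillary distance. A minimal sketch, assuming a typical 64 mm IPD (real SDKs read this from the user's profile) and a hypothetical `stereo_eyes` helper that is not part of any SDK:

```python
# Place two virtual cameras for a stereo render: each eye sits half the IPD
# to either side of the head position, along the head's right axis.

IPD = 0.064  # metres -- an assumed typical value, not a measured one

def stereo_eyes(head_pos, right_axis, ipd=IPD):
    """Return (left, right) camera positions offset along the head's right axis."""
    half = ipd / 2.0
    left  = tuple(p - half * r for p, r in zip(head_pos, right_axis))
    right = tuple(p + half * r for p, r in zip(head_pos, right_axis))
    return left, right

l, r = stereo_eyes(head_pos=(0.0, 1.7, 0.0), right_axis=(1.0, 0.0, 0.0))
print(l, r)  # (-0.032, 1.7, 0.0) (0.032, 1.7, 0.0)
```

Rendering the scene once from each of these positions gives the two similar-but-offset views that produce depth in the headset.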
Agree with Fred,
you can check this link: http://vvvv.org/contribution/oculus-rift-dk2-0 -
Mmmm... it sounds like video needs a custom plugin. But what about Izzy 3D? It looks like setting up two virtual stages with cameras at the right distance and mapping them to one stage through IzzyMap could do the trick, couldn't it?
I don't have an Oculus yet, but I can send the image to my phone and my Google Cardboard... I'll try -
Sending to Google Cardboard over TCP will be a lot easier. The Cardboard framework takes care of the subtle optical distortion and colour convergence adjustments needed to make the image look OK that close. The Oculus does this too, but it is done with shaders as part of the pipeline in the SDK.
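The shader-based lens correction mentioned above is, at its core, a radial warp. A minimal sketch of the idea in Python, with illustrative polynomial coefficients that are assumptions rather than the Rift's actual calibration values:

```python
# Barrel distortion sketch: the shader pushes each pixel outward by an amount
# that grows with its distance from the lens centre, so the lens optics
# un-warp the image back to looking straight.

K = (1.0, 0.22, 0.24)  # hypothetical radial coefficients k0, k1, k2

def distort(x, y, k=K):
    """Warp a point in lens-centred coordinates by a radial polynomial."""
    r2 = x * x + y * y
    scale = k[0] + k[1] * r2 + k[2] * r2 * r2
    return x * scale, y * scale

# Points farther from the lens centre are pushed out proportionally more:
near_x, _ = distort(0.1, 0.0)
far_x, _ = distort(0.5, 0.0)
print(near_x, far_x)
```

In the real SDK this runs per-pixel on the GPU, together with a per-colour-channel variant that corrects chromatic aberration.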
The vvvv implementation is great.