New website up
After a long time working with my own custom Joomla version, I decided it's time to give it a slight overhaul
(especially for working with bigger resolutions).

Please be aware that some old stuff is still missing until
I have shifted everything to the new server.


EDIT: sIBLstage for controlling and sIBLlight for profiling lights are now out for testing,
and I will write a post mortem of the development process in the coming weeks.

On to the old article...


I primarily thought of this as a comment on the podcast itself, but since it includes so much stuff and might get buried, I decided to put things here as an article. Some of the stuff described here I will try for myself if I find the time and budget; for the rest I'll let others explore my train of thought.
Again, it's actually more of a braindump / brainstorm.

Word of warning: behind every sentence there should actually be a question mark instead of a period.

First, parallel lights in small environments are a thing for sun simulation; there is a YouTube tutorial on this:

Since LED stage lights are becoming smaller and smaller, this might be feasible for standard lights too.
The simulation of the sky color would be part of the LED stage system.
(Although I'm not sure on this.)

(EDIT (2024): Godox seems to offer a parallel beam modifier now, see here)

Second, having additional lights to support LED lighting, even on an artistic level, seems very important.
Gaffer and Gear has an interesting video about it (no need to watch it to follow this article):

Well, this was one of the reasons an old idea resurfaced: use an HDRI to control real lights. (I had initial thoughts about an HDRI controlling lights back in the sIBLedit times, and at that time there was another user asking for this in the HDR Labs forum.)


What is needed is a relatively simple program on a laptop, which reads the HDRI values of a pano and sends the information via DMX to control real RGBWA LED stage lights. So you get the brightness and full spectrum value of the light. (EDIT: Ideally the tool should allow you to control all types of lights... RGB, RGBW, RGBA, RGBCA, etc.)
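To make the core idea concrete, here is a minimal sketch of that lookup step, with hypothetical names and a toy in-memory "HDRI" instead of a real pano file. It samples an equirectangular image at a light's direction and maps the HDR pixel to 8-bit DMX slot values; a real tool would of course need per-fixture calibration and an actual DMX library on top.

```python
import numpy as np

def sample_hdri(hdri, azimuth, elevation):
    """Look up the linear RGB value of an equirectangular HDRI
    at the given direction (both angles in radians)."""
    h, w, _ = hdri.shape
    u = int(round(((azimuth / (2 * np.pi)) % 1.0) * (w - 1)))
    v = int(round((0.5 - elevation / np.pi) * (h - 1)))
    return hdri[v, u]

def rgb_to_dmx(rgb, exposure=1.0):
    """Map linear HDR RGB to three 8-bit DMX slots.
    Real fixtures need calibration; this just scales and clamps."""
    vals = np.clip(np.array(rgb, dtype=np.float64) * exposure, 0.0, 1.0)
    return [int(round(c * 255)) for c in vals]

# toy 5x8 "HDRI": one bright (above 1.0) patch on a dark sky
hdri = np.zeros((5, 8, 3), dtype=np.float32)
hdri[2, 2] = [2.0, 1.5, 1.0]

dmx = rgb_to_dmx(sample_hdri(hdri, 2 * np.pi * 2 / 7, 0.0), exposure=0.4)
```

The `exposure` knob stands in for the relative-to-absolute problem discussed further down: HDRI values are unitless, so something has to decide how bright "1.0" is.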

Add to this a cheap smartphone with a motion sensor strapped to the back of the light, so you can correlate the position of the actual light with the one displayed in the program. Use the actual or a slightly different position on the map to control the light color and intensity.
...ok, the smartphone needs to be reset to the center of the stage, and even an orientation point might be needed, although the latter could also be done by the app itself.

In the past, in pure CG land, we used around 16 spotlights to simulate environment lighting without radiosity (using a median cut algorithm to position them). And if you consider how cheap RGB lights have become, you might get pretty decent lighting without an LED stage at all, although multi-shadowing could be a problem, and you won't get accurate reflections from such a system.
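For anyone unfamiliar with median cut light placement, here is a rough sketch of the idea (my own simplified take, not production code): recursively split the luminance map along its longer axis so both halves carry roughly equal energy, and put one light at the energy centroid of each final region. Four splits give the 16 lights mentioned above.

```python
import numpy as np

def median_cut_lights(lum, n_splits=4):
    """Split a 2D luminance map into 2**n_splits regions of roughly
    equal energy; return one (row, col) light position per region."""
    regions = [(0, lum.shape[0], 0, lum.shape[1])]
    for _ in range(n_splits):
        new = []
        for r0, r1, c0, c1 in regions:
            if r1 - r0 >= c1 - c0:  # split along the longer axis
                sums = np.cumsum(lum[r0:r1, c0:c1].sum(axis=1))
                cut = r0 + 1 + int(np.searchsorted(sums, sums[-1] / 2))
                cut = min(max(cut, r0 + 1), r1 - 1)
                new += [(r0, cut, c0, c1), (cut, r1, c0, c1)]
            else:
                sums = np.cumsum(lum[r0:r1, c0:c1].sum(axis=0))
                cut = c0 + 1 + int(np.searchsorted(sums, sums[-1] / 2))
                cut = min(max(cut, c0 + 1), c1 - 1)
                new += [(r0, r1, c0, cut), (r0, r1, cut, c1)]
        regions = new
    lights = []
    for r0, r1, c0, c1 in regions:
        patch = lum[r0:r1, c0:c1]
        total = patch.sum()
        if total == 0:
            continue  # empty region, no light needed
        rows, cols = np.indices(patch.shape)
        lights.append((r0 + (rows * patch).sum() / total,
                       c0 + (cols * patch).sum() / total))
    return lights
```

The returned positions are pixel coordinates in the luminance map, which map straight back to directions on the pano, so in principle the same routine could decide where to physically place the 16 cheap RGB lights.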

Thinking further, you might also correlate the light color with a light gel database, so you could use non-colored (kelvin-based) lights in such an environment as well.

Since I don't have the budget and time to test this method myself, I am publishing my thoughts on it for others with more
resources. (Although I will probably code the app for it one day and try it on a smaller scale.)

EDIT (2022): Started to code on the project, but it will definitely be on a smaller scale, since my resources (money, tools, access, energy and time) are low.


Considering how cheap the needed equipment is: you can get RGB lights for a very low price, the usb2dmx controllers cost less than 50 bucks, and a smartphone with a motion sensor is also in the sub-100-bucks range. So this might be an option for smaller studios, or even just photo studios. (Product photography: the typical shampoo bottle in front of moving grass / a field.)


Programming-wise, the biggest problems are adapting to various DMX controllers (these don't seem to be standardized) and writing an HSV to RGBWA converter. And in the later stages, a way to send portions of the HDRI to standard LED panels for reflection and background purposes.
All the rest is pretty easy and is just a matter of making the interface as approachable as possible.
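A naive starting point for that converter might look like this: go HSV → RGB, then pull the common (grey) component out of RGB and let the white emitter carry it. This is only a sketch: real fixtures need calibration because the white LED almost never matches pure RGB white, and the amber channel (the A in RGBWA) needs actual spectral data for the fixture, so I stop at RGBW here.

```python
import colorsys

def hsv_to_rgbw(h, s, v):
    """Naive HSV -> RGBW (all components 0..1): extract the shared
    white component from RGB and route it to the white emitter.
    Real fixtures need per-fixture whitepoint calibration on top."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    w = min(r, g, b)
    return r - w, g - w, b - w, w
```

The nice side effect of subtracting the white component is that desaturated colors are rendered mostly by the (usually more efficient and better-CRI) white LED instead of mixing them from RGB.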

Again, just a small braindump on this... I might be wrong on so many levels (light calibration to the DMX commands comes to mind). Since I haven't done any real experiments on how well this works, take it all with a very small grain of salt... I might update this article during the next two years to post my findings.

EDIT (2022): Started to post some progress reports in the discussion thread (which I invite everyone to join).
Updates on the project can be found in the forum.

I know this article sounds very Dunning-Kruger, but I think that's a good thing when you start a project, especially if it's experimental and no one else depends on it: you should not have doubts at the beginning of a project, they come during execution anyway, and as long as there is an honest post mortem, everything's fine.

EDIT (2024):
This article is VERY Dunning-Kruger, and even now my view on the project still is. But I don't think I would have started the project if I had known how much it would expand feature- and knowledge-wise, and how long this on/off weekend development would take.

Also worth mentioning is another project called cybergaffer, which uses a quite different approach to the problem. Worth a visit. The mirror ball correlation I came up with is heavily inspired by their site.


Longer 2024 addendum, some kind of intermediate post mortem and summary of the forum:

The smartphone positioning idea has currently been canned. As I found out, cheap smartphones usually ship with a very limited or even fake motion sensor. An additional idea I had at the time was to expand it in the future into a general-purpose app which would allow you to record light positions and orientations for on-set supervision in general. (I would love to be the person who is the reason selfie sticks get introduced for VFX supervisors running on set, but this has to wait :) )
Instead, inspired by the cybergaffer website, I now provide mirror-ball-based positioning:
Meaning you film a mirror ball instead of the subject and align the video feed with an overlaid HDRI, so you can match the position of the light (and not the other way round like the cybergaffer guys do: acquire the light position to get the light color). That said, both concepts are interesting, and it will be interesting to compare both strategies.
Also, a general-purpose radar view which gives you the general direction and height has been added, which might be just good enough for a quick setup.
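The geometry behind the mirror ball trick is compact enough to sketch (my own simplified version, assuming an orthographic camera looking down -Z at a ball filling the unit circle): a point on the ball image gives you the surface normal, and reflecting the view ray around that normal gives the world direction that point mirrors, which is what gets aligned against the overlaid HDRI.

```python
import math

def ball_pixel_to_direction(x, y):
    """Map a point on a mirror-ball image (x, y in [-1, 1]) to the
    world direction it reflects, camera looking down -Z."""
    r2 = x * x + y * y
    if r2 > 1.0:
        raise ValueError("point is outside the ball")
    nz = math.sqrt(1.0 - r2)          # sphere normal n = (x, y, nz)
    # reflect the view ray d = (0, 0, -1): r = d - 2 (d . n) n,
    # with d . n = -nz, which simplifies to:
    return (2 * nz * x, 2 * nz * y, 2 * nz * nz - 1.0)
```

The center of the ball reflects straight back at the camera (direction (0, 0, 1)) and the rim reflects directly away from it, which is why a single ball shot covers almost the full sphere of light directions.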

The DMX controller problem can also be solved by adding Art-Net support. I just found out that these devices can be acquired relatively cheaply, so it's pretty probable I'll look into it some time this year if I have spare cash.

Surprisingly, the color engine (cogine) itself wasn't the "big" problem, besides some minor hiccups: it's pretty easy to build a channel-agnostic DMX-controlled light once you get the basics, whether it's RGB- or HSL-controlled. The bigger problem is the time you need to invest to collect the data, and how to enter it into the system in the most convenient way. Things like having to define a custom whitepoint (see next paragraph). That's currently my biggest problem: don't confuse users with too many options, and don't make the data acquisition a lengthy process.

Another surprising/interesting byproduct so far was the custom whitepoint calibration of the light by camera: you film a grey card illuminated by the light and use the HDMI feed of it (via an HDMI-to-USB adapter, so it is recognized as a webcam feed) to adjust the light values until the saturation is at a minimum. Then cogine uses this value to desaturate to. My cheap lights do look significantly better (this removes the bluish / magenta tint you find in some of those cheap products). My spectrometer also gives slightly better values. This of course comes at the price that the maximum firepower is slightly lower. I might put this, and the whole cogine, into a separate app, since it might be beneficial for others.
This also means you could calibrate the lights to a camera LUT if the feed allows you to output it. I didn't check if a grey ball would work (so you could measure lights which are at an angle to the camera), but in theory it should.
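The quantity being minimized during that calibration is easy to compute from the webcam frame. A minimal sketch (hypothetical function, assuming the grey card region is already cropped out as a linear RGB float array): tweak the light's channel values until this number bottoms out.

```python
import numpy as np

def mean_saturation(frame):
    """Average HSV-style saturation of an RGB frame (floats 0..1).
    During whitepoint calibration this is the value to drive
    toward zero while adjusting the light's channel levels."""
    mx = frame.max(axis=-1)
    mn = frame.min(axis=-1)
    # guard against division by zero on black pixels
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return float(sat.mean())
```

A perfectly neutral grey card under a perfectly calibrated light would read 0.0; the bluish/magenta tint of cheap fixtures shows up as a nonzero floor you can dial down.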

Another big problem on a theoretical level, the relative-to-absolute conversion of HDRI values, is partly solved by allowing you to custom-modify the light outputs on a channel, light and global level. This, plus a good correlation between the lights, gets you pretty far. It not only hands the problem over to the user, but also provides the needed artistic freedom.
But working on it also brought up an interesting concept to do it on a stage / LED backdrop panel level:

You display a portion of the HDRI on the LED panel/volume, just big enough to be easily measured by a lux meter, and correlate the measured value with the sum of the floating point pixel values of that section of the HDRI (for example 1000 lux = 900.0 in pixel values). Now you calculate the sum of the pixels at the light position (let's say 1800) and correlate it with the measured maximum output of the light (let's say 4000 lux).
In the end you can compare both to get the absolute light value in relation to the panel (in that case the light would run at 50% brightness). By doing it that way, everything in the volume (camera, panel, lights and the HDRI used) is calibrated to the element which has the least amount of firepower: the LED panel. I would love to test this concept, but for this I need a big cheap panel / TV which I currently can't justify investing in. I'm not sure how and if it's possible to adapt this on a green screen level (maybe a simple keyer preview, so you correlate the lights to the camera?)... anyway, I don't promise anything, but it's on my experimental list.
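The arithmetic behind that 50% figure fits in a few lines (hypothetical function name, using the numbers from the text):

```python
def light_brightness(panel_lux, panel_pixel_sum, light_pixel_sum, light_max_lux):
    """Calibrate a light against the LED panel: convert the HDRI pixel
    sum at the light's position into absolute lux via the panel
    measurement, then express it as a fraction of the light's output."""
    # 1000 lux corresponds to 900.0 in pixel values -> lux per pixel unit
    needed_lux = light_pixel_sum * panel_lux / panel_pixel_sum
    return needed_lux / light_max_lux

# 1800 pixel sum * (1000 lux / 900.0) = 2000 lux needed,
# out of a 4000 lux maximum -> the light runs at 0.5 (50 %)
brightness = light_brightness(1000, 900.0, 1800, 4000)
```

A value above 1.0 would mean the light can't keep up, which is exactly the case the "calibrate everything to the weakest element" idea is meant to avoid.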

The Dunning-Kruger effect, which completely pushed the scale of operation out of my mind, was actually a good thing. I'm not sure if access to a real stage and good lights would have forced me to come up with a lot of the solutions. Also, development on that scale would have been exponentially more expensive than my current setup, which I bought over the years before and during the project. And even then I always had multi-purpose use in mind... I think the overall equipment cost JUST for this project is currently around 250 bucks. The rest can be, and is, used for other projects / hobbies etc. and would have been acquired anyway. I wish I had had more continuous dev time; it would have shortened development significantly and kept the motivation level constantly high.

And btw, the reason I didn't publish a preview version during current development is pretty quickly explained: a lot of the file format changed during development, and I don't want to show up every two months and say "ehhh, the lights need to be recalibrated"... or write internal converters for intermediate formats.