... I remember using a mark/space selection to overdrive an LED at Uni, to increase the brightness of a standard red LED: instead of 12 mA we used 24 mA, but only for 50% of the time, and we had to measure the output, make a graph etc. The light output did seem to increase to the eye, but the lux meter did not record this increase; rather poor results, as far as I was concerned at the time. ... However, we were told this was a standard way to increase output from an LED, and that has now started me rethinking the results, and I wonder if a lux meter can really show the perceived light from an HF device when the mark/space ratio is not equal? ... In theory, lumens should reflect the human eye's ability to see, but I am not sure that any measuring device can do this with an unequal mark/space ratio, and this would explain the difference perceived.
You make some very important points. Although, as you say, lumens relate to the brightness of light as perceived by humans (e.g. taking into account the varying sensitivity of the eye to different wavelengths etc.), a problem arises when the light intensity is not constant. The human eye will tend to perceive something close to the peak light intensity (the photochemical nature of the retina having an effect akin to what one would call 'persistence' in a CRT), whereas a meter will measure some sort of average intensity; hence, as you say, brightness as perceived by humans will tend to increase (for the same average light level) if the duty cycle ('mark/space' ratio) is less than 100%.
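To put rough numbers on that, here is a minimal sketch of my own (not from any measurement), assuming light output simply proportional to drive current and an eye that 'holds' the peak; both are idealisations, of course:

```python
# Toy model (assumed, not measured): a meter integrates the waveform, so it
# reads the time-average; the eye, on the argument above, holds something
# near the peak. Light output is assumed proportional to drive current.

def readings(peak, duty):
    """Return (meter_reading, eye_reading) for a rectangular PWM drive."""
    meter = peak * duty   # time-averaged intensity, as a lux meter would see
    eye = peak            # 'persistence' holds the perception near the peak
    return meter, eye

# The 12 mA vs 24 mA-at-50% experiment from your post, in relative units:
print(readings(peak=1.0, duty=1.0))   # (1.0, 1.0)  steady 12 mA
print(readings(peak=2.0, duty=0.5))   # (1.0, 2.0)  24 mA, 50% mark/space
# Same meter reading, roughly double the 'eye' reading - exactly the
# mismatch you describe seeing at Uni.
```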
This was not much of an issue with incandescent lamps, even when dimmed by waveform chopping, since the thermal nature of the light-generating process largely 'smoothed out' any variations in light output. However, modern light sources, particularly LEDs, can reduce light output literally to zero during the 'space' parts of a sub-100% duty cycle, hence creating the potential for human and meter assessments of brightness to be very different.
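As a sketch of that contrast (again a toy model of my own, with an assumed filament thermal time constant of 50 ms and a 100 Hz chopped drive):

```python
# Toy simulation (assumed parameters): a filament behaves like a first-order
# thermal low-pass filter on the chopped drive, whereas an LED follows the
# drive essentially instantaneously.

def minimum_outputs(duty, freq_hz=100.0, tau_s=0.05, steps=100000):
    """Simulate 1 s of rectangular drive; return (led_min, filament_min)."""
    dt = 1.0 / steps
    filament = 0.0
    led_min = filament_min = float("inf")
    for i in range(steps):
        t = i * dt
        drive = 1.0 if (t * freq_hz) % 1.0 < duty else 0.0
        filament += (drive - filament) * dt / tau_s  # thermal lag
        if t > 0.5:  # skip the warm-up transient
            led_min = min(led_min, drive)
            filament_min = min(filament_min, filament)
    return led_min, filament_min

led_min, fil_min = minimum_outputs(duty=0.5)
print(led_min)            # 0.0: the LED really does go dark every cycle
print(round(fil_min, 2))  # ~0.48: thermal mass smooths out the chopping
```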
Whether any of this is relevant to what you're discussing, I don't know. In other words, I don’t know whether (m)any manufacturers of LEDs employ duty cycles less than 100% (other than, perhaps, for dimming*) in order to increase perceived brightness. One might hope that they do, but this, of course, would cost a little more money than a simple 100% duty cycle.
[* in view of what we’re discussing, dimming by duty cycle reduction might be expected to be non-ideal, in terms of eye-perceived brightness, for LEDs]
... It would also explain the problems I had using a camera and its readings to compare light. I had at first thought it was UV and infrared components causing the higher-than-expected camera readings from the CFL compared with the LED; now I think it may be the mark/space ratio.
I’m not so sure about that. Whilst the aspects of human perception we’re discussing can result in a situation in which (visible) light is perceived as brighter by a human eye than a meter suggests, I can’t see how it could explain meter readings being higher than would be expected from the perceived brightness of (visible) light. On the other hand, if (as you mention) the meter were also measuring non-visible light (UV & IR), or even just ‘over-weighting’ parts of the visible spectrum to which eyes are less sensitive, then it could well produce ‘higher than expected’ readings.
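By way of illustration of that last point (a made-up three-point spectrum and a hypothetical flat-response detector; the V(λ) figures are approximate CIE photopic values):

```python
# Hedged illustration: a detector whose spectral weighting differs from the
# eye's photopic sensitivity V(lambda) can read 'higher than expected'.
# The sample points and source spectrum below are invented for the example.

wavelengths = [450, 550, 650]        # nm: blue, green, red
v_lambda    = [0.038, 0.995, 0.107]  # approx. CIE photopic sensitivity
flat_sensor = [1.0, 1.0, 1.0]        # hypothetical un-weighted detector

spectrum = [0.8, 0.2, 0.8]           # made-up relative power at each point

eye_like = sum(p * v for p, v in zip(spectrum, v_lambda))
sensor   = sum(p * s for p, s in zip(spectrum, flat_sensor))
print(round(eye_like, 3), round(sensor, 3))   # 0.315 vs 1.8
# The flat detector counts blue and red light that the eye barely weights,
# so its reading is far higher than the perceived brightness would suggest.
```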
Kind Regards, John