Hi JTR - would you consider switching your standard test for these power LEDs to 60°C instead of 25°C? It seems that most of the power LED mfgs now are not far from their 25°C specs, so this data still remains - well - interesting but not all that valuable.
Your test has the potential to be more realistic than the mfg specs, since a 60°C test reflects what real flashlight and lighting use is like. Frankly, the only downside of your exhaustive testing is the new false OTF lumens claims that people make - and then they reference your data as the basis for them.
Just MHO, take it FWIW.
Thanks for all of the work.
Right now I'm not really measuring at a 25°C baseplate. The LED is mounted on a 60 mm square heat sink which has some air flow. The heat sink does indeed get somewhat above ambient temperature as I ramp up the current (especially with 4-die LEDs such as the P7 or MC-E), and the ambient temperature in the room itself varies throughout the year (although not much, because the room is heated in winter and cooled in summer). What I'm doing is roughly halfway between a constant 25°C baseplate and a constant 60°C baseplate. I'd estimate my standard setup has a thermal impedance on the order of 1.5°C/W (plus additional losses at the thermal interface between the emitter and heat sink). Overall this setup is a lot more realistic than the data sheets, which usually assume a constant 25°C junction temperature. In fact, the relative output gain I measured between 350 mA and 1000 mA in my recent XP-G test was less than the data sheet implies, precisely because I didn't hold the junction at a constant 25°C (indeed, I couldn't do so unless I cooled the heat sink below ambient).
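If anyone wants to play with the numbers, here's a quick Python sketch of the arithmetic behind that estimate. The 1.5°C/W figure is my rough guess from above, and the optical efficiency is an illustrative assumption, not a measured value:

```python
# Rough estimate of heat sink temperature in my setup. All constants are
# illustrative assumptions, not measured values.

R_TH = 1.5          # heat sink thermal impedance, degC per watt (rough estimate)
T_AMBIENT = 25.0    # room temperature in degC; varies somewhat over the year

def heatsink_temp(power_in_w, optical_efficiency=0.25):
    """Estimate heat sink temperature for a given electrical input power.

    optical_efficiency is the assumed fraction of input power leaving as
    light; the remainder is waste heat the sink has to shed.
    """
    waste_heat = power_in_w * (1.0 - optical_efficiency)
    return T_AMBIENT + R_TH * waste_heat

# Example: a 4-die emitter like a P7 at ~10 W input
print(heatsink_temp(10.0))  # ~36 degC, noticeably above ambient
```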
Besides that, I'm confused as to whether you want me to standardize on a 60°C baseplate temperature or a 60°C junction temperature (manufacturers tend to use junction temperature). I probably couldn't even do the latter except by taking the manufacturer's estimate of junction-to-thermal-pad impedance, measuring power consumption, calculating the rise of the junction temperature above the heat sink from that measured power, and then adjusting the heat sink temperature accordingly. As the heat sink temperature changes, Vf and therefore power consumption change, which means recalculating the junction temperature rise all over again. I dread to think of the logistics of trying to do this at one data point, much less at 30+ data points. Controlling the baseplate to a constant temperature is a little easier, but still not without problems. It takes a while to adjust the thermoelectric module current just right to keep the heat sink at a constant temperature. Maybe I could do this automatically via a PID controller, but even those take some time to stabilize at the setpoint.
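To give a feel for why this is such a pain, here's a sketch of the bookkeeping involved. The impedance and Vf numbers are made-up placeholders (real values would come from the emitter's data sheet); the point is the circular dependence between Vf, power, and junction temperature:

```python
# Sketch of finding the pad/heat-sink setpoint for a target junction
# temperature. All constants are placeholders, not data sheet values.

R_JP = 7.0        # junction-to-thermal-pad impedance, degC/W (assumed)
VF_25 = 3.3       # forward voltage at 25 degC junction at the test current (assumed)
VF_TC = -0.004    # Vf temperature coefficient, V per degC (assumed)

def pad_temp_for_junction(i_amps, t_j_target, iterations=30):
    """Iterate the pad setpoint until the implied junction temp hits target.

    Mirrors the manual procedure: guess a pad temperature, work out Vf and
    power at the implied junction temperature, recompute the junction rise,
    then nudge the pad setpoint and repeat until it converges.
    """
    t_pad = t_j_target           # first guess: junction == pad
    t_j = t_j_target
    for _ in range(iterations):
        vf = VF_25 + VF_TC * (t_j - 25.0)
        power = vf * i_amps                 # electrical input power, watts
        t_j = t_pad + R_JP * power          # junction temp this setpoint implies
        t_pad -= (t_j - t_j_target)         # nudge setpoint toward the target
    return t_pad

# e.g. what pad temperature gives a 60 degC junction at 1 A?
print(pad_temp_for_junction(1.0, 60.0))  # ~38 degC with these made-up numbers
```

And that's just the calculation; in the real setup every nudge of the setpoint means waiting for the thermoelectric module and heat sink to settle before the next reading.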
I'll grant that not controlling either the baseplate or junction temperature precisely introduces some variability, but it also represents what can be achieved in the real world with halfway decent cooling. At low currents the emitter will be fairly close to ambient temperature, just as it is in my tests. At higher currents it may indeed run warmer than in my tests, but that is relatively easy to correct for. Looking at most of the spec sheets, the difference in output between 25°C and 60°C baseplate temperatures is about 5-6%.
Believe me, it might be nice to standardize on some sort of constant temperature for testing purposes, but after testing a few emitters this way, the knowledge gained seems minimal relative to the HUGE amount of extra work. For example, let's look at these two curves for the neutral and cool K2s:
The difference here between a 60°C baseplate and my standard testing amounts to 6-7%. However, I could have obtained virtually the same plot just by looking at the data sheets and applying a correction for temperature. The only noteworthy thing I discovered here that wasn't obtainable from the data sheets was that the neutral white K2 falls on its face at a lower drive current than in my standard testing. But note that this occurred above the rated current of 1.5 amps anyhow, so it would be of little interest to any reputable manufacturer keeping drive currents within spec.
Anyway, it's not that I'm against doing things the way you suggest in principle, but I see a couple of problems. First off, I lose any direct comparison between previous tests and current ones (unless I perform tests both the new way and the old way). Second, after doing a few LEDs this way, it's just too time consuming due to the logistics mentioned earlier. It takes about an hour to set everything up, and then probably another few hours to gather the data. Fortunately for us, white LEDs don't vary their output by a huge amount with temperature: you need to increase the temperature by 45-50°C to decrease output by 10%.
It's fairly easy to guesstimate output at a 60°C baseplate, or any other temperature, by applying the relevant correction from the manufacturer's data sheet. I strongly encourage anyone citing my data to correct for temperature rise in this manner (and also to correct for any optical losses) - see the sketch below. The thing is, with so many different applications out there, I can't possibly have data for all scenarios. Some lights may get much hotter than 60°C. On the other hand, I'm aware of larger flashlights, and general lighting applications, which actually mimic the conditions in my tests (i.e. the heat sink doesn't get more than about 10°C over ambient).
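Here's a sketch of the kind of correction I mean. I've assumed a linear derating of roughly -0.2%/°C, which is about what the 5-6% drop over the 35°C span from 25°C to 60°C works out to; for real work, read the slope off the relative-flux-vs-temperature curve on the data sheet for the particular emitter:

```python
# Sketch of scaling my measured output to a hotter baseplate. The slope is
# a rough average consistent with the 5-6% drop from 25 to 60 degC; use the
# actual data sheet curve for the emitter in question.

DERATE_PER_C = 0.002   # assumed fractional output loss per degC of baseplate rise

def corrected_output(measured_lumens, t_test_c, t_actual_c):
    """Scale output measured at one baseplate temperature to another."""
    return measured_lumens * (1.0 - DERATE_PER_C * (t_actual_c - t_test_c))

# e.g. 350 lm measured with my sink at ~35 degC, light runs at 60 degC:
print(corrected_output(350.0, 35.0, 60.0))  # ~332.5 lm, about a 5% drop
```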
I hope none of this post came off as overly dismissive, as it wasn't meant that way. You offer valid suggestions to make my tests more meaningful, and in fact, if the logistics weren't against it, I would probably follow them. It's just that during the last two years I've found myself with a lot less time to devote to the experimentation I enjoy, due to the need to earn money and/or take care of personal matters at home. I WILL try to do a few constant 60°C tests on some common emitters in the near future, as I said I would, just to see if they reveal anything new. But after that I doubt I'll have time to do much beyond my usual testing. Truth is, after doing a lot of the things I have to do, I'm just too fatigued to do the things I want to do.
EDIT: To add a bit more here: the way LEDs are trending, I see less and less point in testing at elevated temperatures. For example, look at this chart for the XP-G:
The red line represents total input power to the LED and the white line represents waste heat. In the coming years the white line will trend towards zero, even though it will obviously never get there. In light of this, I tend to think our flashlights and general lighting will operate ever closer to room temperature. This is especially true of portable lighting. There are limits to the amount of energy any cell can store, and that is improving very slowly compared to LED efficiency. For any given runtime, this puts a hard limit on the power delivered to the LED. As more of that power comes out as light thanks to improved LEDs, thermal pad temperatures will continue to approach ambient.
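To make the trend concrete, here's a quick sketch of the arithmetic behind those two lines. The efficacy figures are illustrative guesses; ~330 lm/W is a commonly quoted ballpark for the lm per optical watt of a typical white LED spectrum itself, and as emitter efficacy approaches that ceiling, waste heat approaches zero:

```python
# Waste heat as a function of emitter efficacy. LER_W is roughly the lumens
# per optical watt of the emitted white spectrum (assumed ballpark).

LER_W = 330.0   # lm per watt of emitted light for a typical white spectrum

def waste_heat(p_in_w, efficacy_lm_per_w):
    """Input power minus the portion that actually leaves as light."""
    lumens = p_in_w * efficacy_lm_per_w
    p_optical = lumens / LER_W          # watts leaving the package as light
    return p_in_w - p_optical

print(waste_heat(3.0, 130.0))  # roughly XP-G class, ~130 lm/W: ~1.8 W of heat
print(waste_heat(3.0, 250.0))  # a hypothetical future emitter: ~0.7 W of heat
```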