# Cree's Binning: 50+ LEDs Tested



## cmacclel (Dec 20, 2007)

Post removed.


Testing was not valid, as Cree XR-E LEDs cannot be *comparatively* measured at an angle. When the LEDs were re-measured with the light meter directly over the LED, the variation was only 10%.


Mac


----------



## skalomax (Dec 20, 2007)

Thanks Mac.
The rebels are pretty impressive.


----------



## Mash (Dec 20, 2007)

Wow, that's some variance there.
No wonder people are complaining that their upgrades are sometimes dimmer than the original!
How was the color tint of this largish sample?
+1 on the rebels! Bright and binned properly.


----------



## johnny13oi (Dec 20, 2007)

Man that is a huge amount of emitters, wish I had that many to play around with.


----------



## matrixshaman (Dec 21, 2007)

Thanks for all the work, and the heads-up on what I suspected - although not quite to that degree.


----------



## Alan B (Dec 21, 2007)

Great test and interesting data.

Is measuring the intensity at one point an accurate indication of total flux?:thinking:

-- Alan


----------



## VidPro (Dec 21, 2007)

:thumbsup: thanks for the data


----------



## johnny13oi (Dec 21, 2007)

Wouldn't placing a white box over it and then measuring the reflected spill be a better indication of the output light? These emitters could be putting out light unevenly, causing the huge differences.


----------



## chris_m (Dec 21, 2007)

None of the previous test results I've seen for XR-Es with proper integrating light boxes has ever shown results anywhere near as far from the specified bin as your standard deviation (let alone your outliers) suggests you statistically should get. Applying Occam's Razor, the most likely explanation is a flaw in this test procedure.


----------



## cmacclel (Dec 21, 2007)

johnny13oi said:


> Wouldn't placing a white box over it and then measuring the reflected spill be a better indication of the output light? These emitters could be putting out light unevenly, causing the huge differences.




Why would they be outputting uneven light? This was just one of those "let me check something" ideas and is in no way a scientific study.

The difference in brightness from emitter to emitter was very noticeable.


Mac


----------



## VidPro (Dec 21, 2007)

cmacclel said:


> Why would they be outputting uneven light? Mac


 
i can think of one possible reason for unevenness: the emitter texture and the emitter grid pattern, which show up like crazy with an aspheric projection, or under a scan or micro view of it.
the old Luxeons only had bond wires as something that gets in the way, but all this new stuff with the light-releasing micro-hole phosphors has a pattern to the output. then the Cree has a connection grid besides.


----------



## cmacclel (Dec 21, 2007)

chris_m said:


> None of the previous test results I've seen for XR-Es with proper integrating light boxes has ever shown results anywhere near as far from the specified bin as your standard deviation (let alone your outliers) suggests you statistically should get. Applying Occam's Razor, the most likely explanation is a flaw in this test procedure.




Do you have a link to these previous tests? I have built a good number of lights using the Cree XR-E LEDs and have noticed huge differences in light output from emitters on the same reel. This test of mine just confirmed my findings.

I never had a problem with Lumileds' binning. You KNEW what you were buying. With Cree's binning it's a crapshoot. The colors, forward voltages, and brightness vary greatly within the same bin off the same reel.

Mac


----------



## chris_m (Dec 21, 2007)

Just various test results on here and elsewhere. I've not seen anybody test as many as you have, but enough different test results that it would be statistically very unlikely for none of them to have thrown up a binning error if the LEDs they were testing had the same variation your results do.

Crees are supposed to be binned for colour and brightness. The variation you're seeing suggests there's something wrong in the binning QC - maybe you've just been unlucky with your particular source.


----------



## cmacclel (Dec 21, 2007)

chris_m said:


> Just various test results on here and elsewhere. I've not seen anybody test as many as you have, but enough different test results that it would be statistically very unlikely for none of them to have thrown up a binning error if the LEDs they were testing had the same variation your results do.
> 
> Crees are supposed to be binned for colour and brightness. The variation you're seeing suggests there's something wrong in the binning QC - maybe you've just been unlucky with your particular source.



The LEDs I tested were from the following sources:

P4's Authorized Cree distributor
Q5's 2 Separate group buys here on CPF
Q2's Authorized Cree distributor
Q2 Cutter

So I doubt there is a source problem, as the LEDs I have on hand have come from various sources.

As for Cree's binning QC, I can't believe they have a QC department for binning, given the variations I have seen. It can go from a 6000K tint to a 3000K tint on the same reel 

Mac


----------



## EntropyQ3 (Dec 21, 2007)

cmacclel said:


> The LEDs I tested were from the following sources:
> 
> P4's Authorized Cree distributor
> Q5's 2 Separate group buys here on CPF
> ...



First off, thanks for making your data available.
It is incredibly useful for people who would otherwise only deal with small numbers or even individual samples, to get an idea of the normal spread of a given product. Helps immensely with troubleshooting.

As others have said, your data would be easier to interpret if you had made some kind of "total light output" estimation, never mind that it wouldn't be up to the official standards. Lots of folks get reasonably repeatable results from pretty simplistic setups. As long as you don't have some such device, however, you will find that people would rather believe that your methodology is flawed than accept that your data reflect luminous output. Ungrateful *******s.  You say that the differences in output are obvious to the eye as well as the lux meter. That implies that variation in output patterns is not the sole explanation (and it would be a problem in itself if it were).

Looking at your data, I realize that I may have been far too trusting in accurate binning, and too inclined to look for problems in places where, perhaps, there were none. I admit the spread looks extreme, far beyond what even rather lax deviation standards should allow.

If you could make some integration sphere hack, it would be great.


----------



## cmacclel (Dec 21, 2007)

I'm not going to take this testing any further. Is my testing flawed? That is quite possible *if* the light emitted from the XR-E lens is uneven, as some people have mentioned, but that would mean there are huge variations in the manufacturing process, which I find hard to believe. Each LED was placed on the heatsink in the exact same location and orientation.

Also look at the 3 rebels I tested with the same test setup. They are within 2% of each other. 

I have 40 of the 0100 Rebels at home maybe I'll test a few more.

Mac


----------



## chris_m (Dec 21, 2007)

EntropyQ3 said:


> As long as you don't have some such device, however, you will find that people would rather believe that your methodology is flawed than accept that your data reflect luminous output. Ungrateful *******s.


I'm struggling to see where his flaw is - it's just that, as I said before, Occam's Razor suggests it's more likely there is one than that everybody else is wrong. If I hadn't seen several other consistent testing results I'd be more prepared to believe that Cree are dishonest and don't actually bother binning their LEDs.

I don't believe the flaw (assuming there is one) has to do with variation in light output across the beam - I'd expect the beam pattern to be more consistent than the light output. About the best I can come up with is a fault in your power supply, given the correlation between your light meter and the Mk 1 eyeball.


----------



## cmacclel (Dec 21, 2007)

I had a calibrated Fluke True RMS DVM measuring current at 347 mA. I had my variable supply set to 4.2 V with the current limiting set to 350 mA. When I first saw the huge variances between LEDs I also questioned my setup.

Mac


----------



## AvPD (Dec 21, 2007)

What's the process when the LEDs are manufactured? Are they similar to computer chips, which have "yield" rates? Do some have fewer defects than others, with the "bin" categories representing the ones that work, from best to worst? Or is the machinery retooled every few months so they all come out the same?


----------



## EntropyQ3 (Dec 21, 2007)

AvPD said:


> What's the process when the LEDs are manufactured? Are they similar to computer chips, which have "yield" rates? Do some have fewer defects than others, with the "bin" categories representing the ones that work, from best to worst? Or is the machinery retooled every few months so they all come out the same?


I'm not sure anyone not working at Cree knows the details. The LEDs are probably similar to computer chips insofar as they are produced in batches, with some individual variability between parts.

If we assume that the data cmacclel has contributed are proportional to luminous output, then that could imply, for example, that Cree makes batches of a few thousand, tests a sampling of these, and labels that particular batch according to the outcome of the samples. It could also be that all batches whose samples failed to comply with the minimum criteria for a given upper bin are labeled "P4", or whatever the lowest bin is called at the time, leading to extreme variability in output among samples from the lowest bin.

I plugged his data into Excel, and:
30 P4 samples yielded:
Average output: 341
Standard deviation: 58

12 Q5 samples yielded:
Average output: 423
Standard deviation: 72

Note that comparing averages, his Q5 samples averaged 423/341=1.24 or 24% higher output than the P4s, whereas Cree bin data would suggest 110/84=1.31 or 31% higher output for the Q5. This is pretty close, particularly considering that the Q5 bin is the top drawer, and would thus tend towards the lower part of the bin, and the P4 is the current bottom drawer.

So his average results are, bearing in mind the smallish sample on the Q5s, pretty much in line with what we would expect; it's just the _spread_ that is shocking. Indicating, perhaps, that the bins actually just indicate where the average luminous output of that particular batch ended up, rather than reflecting individually binned chips.
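The arithmetic above can be double-checked in a few lines; a minimal sketch using only the summary figures quoted in this post (not the raw per-LED readings):

```python
# Summary statistics quoted above (the Excel results in this post).
p4_mean, p4_sd = 341, 58   # 30 P4 samples
q5_mean, q5_sd = 423, 72   # 12 Q5 samples

# Measured bin-to-bin ratio vs. the ratio implied by the bin
# figures used above (110 lm for Q5, 84 lm for P4).
measured_ratio = q5_mean / p4_mean   # ~1.24
spec_ratio = 110 / 84                # ~1.31

# Coefficient of variation: the spread relative to the mean.
p4_cv = p4_sd / p4_mean              # ~17%
q5_cv = q5_sd / q5_mean              # ~17%

print(f"measured {measured_ratio:.2f} vs spec {spec_ratio:.2f}")
print(f"CV: P4 {p4_cv:.0%}, Q5 {q5_cv:.0%}")
```

Both bins show a coefficient of variation around 17%, which is what makes the spread look so much worse than the +/- 7% measurement tolerance Cree quotes.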

This is what the data indicate, and lacking better information, I'm inclined to believe this is the case. 

Of course, it makes the value of paying for a higher bin that much more of a lottery.


----------



## Mash (Dec 21, 2007)

cmacclel, I truly hope you don't take the discussion in this thread the wrong way. I am sure we all appreciate the time and effort you put into doing this.
Your results are certainly very interesting, and that's why they have generated this debate. Either way you look at it, one group or another will have difficulty accepting the results!
Some who prefer to trust the binning procedure of the manufacturer (when buying, it's all we really have to go on, short of checking every LED we buy) will doubt your testing, while others will agree with you.
Regardless, I thank you for your effort, and encourage you to create even more debate!


----------



## cmacclel (Dec 21, 2007)

I have about 50 Q5's; I'll see if I can check them over the weekend.

Mash no offense taken at all 

Mac


----------



## TranquillityBase (Dec 21, 2007)

Nice job Mac

Excellent thread:twothumbs


----------



## EntropyQ3 (Dec 21, 2007)

This http://www.cree.com/products/pdf/XLamp7090XR-E_B&L.pdf is where Cree talks about binning.

I quote:
"All XLamp LEDs are tested and sorted by color and brightness into a unique bin. Each bin contains LEDs from only one color and brightness group and is uniquely identified by a bin code."
While it cleverly avoids stating that the LEDs are _individually_ tested, it sure implies it.
In their product pdf http://www.cree.com/products/pdf/XLamp7090XR-E.pdf, they refer to bins exclusively, giving no values at all for the spread within a particular bin (implying absolute limits), but they do say this:
"Cree maintains a tolerance of +/- 7% on flux and power measurements."

As far as I have been able to find, this is all Cree has to say on the subject. It's interesting to note what they don't say explicitly. Reading their pdfs with the results above in mind, I still can't say whether cmacclel's results are actually within the bounds of their binning process, only that they aren't what the docs would lead me to believe.


----------



## VanIsleDSM (Dec 21, 2007)

Alan B said:


> Great test and interesting data.
> 
> Is measuring the intensity at one point an accurate indication of total flux?:thinking:
> 
> -- Alan



Nope, not at all.. 

In my experience, over the 40+ Crees I've gone through, I'm very pleased... each LED in every bin looks exactly the same.. there is definitely no 3000-6000K variance as suggested.. that's total BS.. the colour is bang on.

Intensity all looks the same to me too.. 

I think there has to be a flaw in the test.. possibly with the LED driver, as much as the fact that intensity is only measured in one spot.


----------



## cmacclel (Dec 21, 2007)

VanIsleDSM said:


> Nope, not at all..
> 
> 
> In my experience, over the 40+ Crees I've gone through, I'm very pleased... each LED in every bin looks exactly the same.. there is definitely no 3000-6000K variance as suggested.. that's total BS.. the colour is bang on.
> ...



You say "total BS".

How would someone not take offense at that comment?

From the first day the Cree XR-E was released it has been well known that their tints vary greatly. Many people have mentioned this, but I guess I'm just new here and have no CLUE at all what I'm talking about.

You said intensity all looks the same to you................. I guess I bought this light meter for nothing 

Mac


----------



## MatajumotorS (Dec 21, 2007)

Exactly - "*looks* exactly the same" - try comparing them side by side, and you will see the difference.


----------



## WeLight (Dec 21, 2007)

cmacclel said:


> The LED's I tested where from the following sources
> 
> P4's Authorized Cree distributor
> Q5's 2 Separate group buys here on CPF
> ...


We are an authorised Cree distributor also; check the Cree web site. Your results don't stand up statistically, especially when you compare to only 3 Rebels, and you don't have the specific details of a rigorous test. The Q5s purchased in the group buys, where did they originate? I remain skeptical about China resellers and about other results published here showing their high Q bins don't even meet P bin specs.


----------



## jirik_cz (Dec 21, 2007)

Interesting test, but very confusing to me.
Crees are known to have a tighter beam, with higher intensity in the center, than Luxeons. Probably even a Cree P4 should be brighter than a Rebel 0100 in the center of the beam. But your numbers are completely different; how is that possible?

You can see it in this graph from evan9162 (from the Cree XR-E Q4 evaluation)


----------



## VanIsleDSM (Dec 21, 2007)

I'm sorry, but it is total BS.

There is no way a 3000K-binned Cree will look anything like a 6000K, or vice versa; never has, never will... Whether you were exaggerating to make your point or you seriously think that, it's BS either way.

As just pointed out, the Crees have only a 90-degree angle... therefore they should be more intense... there is a flaw in the test... Cree bins for minimum flux... your results just don't make sense.

Something I have never done is buy any Cree LEDs from China or any group buy... maybe that's the problem.. fishy LEDs... 

I was thinking about making a big order from DX, maybe I won't...


----------



## greenLED (Dec 21, 2007)

Mac, I think you're onto something here. Could I suggest you place your setup inside a box and take measurements from "bounced" light inside the box instead of the actual hotspot? 

Doing these tests takes a lot of effort; thanks for taking the time.



WeLight said:


> We are an authorised Cree distributor also; check the Cree web site. Your results don't stand up statistically, especially when you compare to only 3 Rebels, and you don't have the specific details of a rigorous test. The Q5s purchased in the group buys, where did they originate? I remain skeptical about China resellers and about other results published here showing their high Q bins don't even meet P bin specs.


I disagree. Mac's sample size is adequate to at least make comparisons between Q5's and P4's. Plus, we can start making inferences about those emitters that do not come directly from "the" source.


----------



## cmacclel (Dec 21, 2007)

VanIsleDSM said:


> I'm sorry but it is total BS.
> 
> There is no way a 3000K-binned Cree will look anything like a 6000K, or vice versa; never has, never will... Whether you were exaggerating to make your point or you seriously think that, it's BS either way.
> 
> ...






You make it very obvious you have NO CLUE what you're talking about. How many Cree XR-E LEDs have you powered up? I have been working with them since they were introduced and have purchased HUNDREDS. I would say 90% of my purchases were made directly through a Cree authorized distributor.

The tint binning is pathetic when it comes to consistency. I HAD to make it a point to screen all tints before installing them into multiple-LED lights, as the tint CAN be completely, 100% positively different within the same reel.

If you're trying to **** me off with your ARROGANT REMARKS, it's working. 


Cutter, as for the Rebels' high numbers, they mean nothing when it comes to brightness, as the XR-Es and Rebels have completely different beam patterns. Those numbers were posted just to show how closely each LED measured compared to the others.


----------



## VanIsleDSM (Dec 21, 2007)

I already stated in my post how many I've gone through...

It doesn't compare to your hundreds, but it's enough of a sample for me to have something to say about this thread..

I've ordered 4 different combinations of colour and flux bin, and each one was exactly as expected.. there were no dramatic colour differences at all.. 3000K-6000K is absurd.. where's your proof? until I see pics of that I call BS.

Let's not get heated.. 

I just can't believe that.. from my experience there was nothing of the sort.

Too bad it's impossible to buy Rebel 0100s.. Future never has them anyway.. I've never seen any other dealer.


----------



## SemiMan (Dec 21, 2007)

While I have never found Cree to be as consistent as Lumileds w.r.t. accuracy within their measured bins, these results do not make sense. I agree with some other posters: something must be wrong.

I have some observations and some questions:

a) Do you have a photograph of your setup? A picture is worth a thousand words. It would help a lot in understanding what you are doing.

b) You said the LEDs are right off the reel. How are they mounted? Have they been put on stars, or are you clamping them down somehow?

c) How long are you allowing them to thermally stabilize? What is the thermal interface to the heat sink?

1) If you are not paying on the order of $2,000+ for a light meter, or using a spectrophotometer, your results measuring an LED can be off by quite a bit; however, we are talking 25%+, not 100%. As the Cree's color temperature is not flat across its output, this is going to cause even more variance.

2) I am not familiar with your specific meter (though I am with meters in general). With Cree's narrow-angle output, you are likely going to get quite a wide variation in readings when you are only 7 inches away. Your illumination source is a point, but you are projecting onto a sensor dome that is wide and fairly close. This is about as non-ideal a situation for accuracy with your meter as you can get. The Rebel is also a point source, but its output does not vary much with angle around the center, so you are likely going to get more consistent results.

3) You absolutely should get higher readings from the Q5 than the Rebel, simply because of the beam angle of the Cree. 

4) Generally the same bin is quite consistent w.r.t. color. Certainly not 3000-6000K. Frankly, I have never seen anything near 3K out of a cool white Cree. Now 5K-10K, yes, that I would believe, and that may even be within one of their bins... I'd need to look that up. Perhaps when they are first releasing product (samples, essentially), they put whatever they make that hits the brightness target on the same reel? I do not think that is true for production.


I think whenever one publishes data, there is a duty of care that must be taken to ensure the data are meaningful. I would encourage you to change your measurement setup. As some have suggested, a light box would be better. A simple setup would be to diffuse the output through a piece of white paper halfway between the LED and the light meter. Yes, there is a color change, but flux should not be affected greatly. Your light meter looks fairly sensitive, so you should still have plenty of light for measurement. It would also be nice to see pictures of your test setup, including how the emitters are mounted.

You mentioned you purchased your P4s from an authorized distributor. Is there an issue with sharing who that is?

Again, while I think Cree does have more variance than Lumileds, the variance you are seeing does not correspond to what I have seen and measured. That could indicate other issues in your LED mounting, drive, or where you got them from. Let's take the emotion out of this thread and see if we can get to the bottom of this.


Semiman





I was going through my stash of LEDs and decided to compare Cree's flux binning. Just like Cree's Vf and color binning, the flux is all over the place 

It's 100% clear to me that you NEVER KNOW what you're getting when you purchase Cree XR-E LEDs.

I set up a simple test:

1) Massive aluminum heatsink
2) Vise head holding a Meterman LM631
3) Constant-current power supply set to 350 mA
4) A bunch of LEDs

1st batch of P4's: 35 pieces, all from the same reel, just removed from the factory packaging.

The heatsink's LED location was approximately 7 inches away from the light meter's head.


----------



## 270winchester (Dec 21, 2007)

nm.


----------



## cmacclel (Dec 21, 2007)

I just re-checked a few of the LEDs with the meter sensor straight over the LEDs, and the results are very different; the spread is much closer.

I took 2 LEDs that were over 200 points apart when I previously measured them from the side, and using the straight-on measurement they were within 10-12% of each other.

I have no idea why comparing the Cree LEDs from the side would cause so much variance. The Rebels were comparable no matter where the measurement was taken.

I did measure 10 more Rebels, and they were all within 3% of each other, from the side or straight on.


As for the tint variance that is still a big issue for me. Even my wife notices the difference! and I know she's not yes(ing) me to death 


Mac


----------



## Alan B (Dec 22, 2007)

Thanks for refining your tests. Sounds like we are learning some useful details. What angle from straight ahead were the earlier measurements taken? 

Thanks, Alan


----------



## SemiMan (Dec 22, 2007)

Rebels have a very pure Lambertian pattern. The variation in output as you change the angle is very gradual. The Crees do not have a Lambertian pattern; they behave like they have an optic on the front. The variation w.r.t. angle is quite large, depending on the angle.
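The angle sensitivity described here can be sketched with a toy cos^n model (the exponent is an assumed illustrative value, not Cree's published beam data):

```python
import math

def relative_intensity(theta_deg, n=1):
    """On-axis-normalized intensity of a cos^n emitter at theta_deg off axis.
    n=1 is a pure Lambertian (Rebel-like); larger n approximates a
    narrower source (XR-E-like)."""
    return math.cos(math.radians(theta_deg)) ** n

# A 30-degree pointing error costs a Lambertian source ~13% of its
# reading, but a narrow (n=8, assumed) source ~68% -- easily enough
# to swamp real bin-to-bin differences.
for theta in (0, 15, 30):
    print(theta, round(relative_intensity(theta, 1), 2),
          round(relative_intensity(theta, 8), 2))
```

This is consistent with Mac's follow-up: re-measuring straight over the LED collapsed most of the spread.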

Semiman






cmacclel said:


> I just re-checked a few of the LEDs with the meter sensor straight over the LEDs, and the results are very different; the spread is much closer.
> 
> I took 2 LEDs that were over 200 points apart when I previously measured them from the side, and using the straight-on measurement they were within 10-12% of each other.
> 
> ...


----------



## WeLight (Dec 22, 2007)

greenLED said:


> Mac, I think you're onto something here. Could I suggest you place your setup inside a box and take measurements from "bounced" light inside the box instead of the actual hotspot?
> 
> Doing these tests takes a lot of effort; thanks for taking the time.
> 
> ...



I call your disagree and raise it; I have a stats book around here somewhere that says at an absolute minimum you would need 300 pcs, and preferably 1000, to be statistically relevant. There remains too little data on the test methodology. If you tested P4 bins, how old were they? Did you reflow-solder them to a PCB, and if so, did you prebake the LEDs? If there was moisture under the lens (happens to non-vacuum-sealed LEDs after a relatively short period) when you reflowed, you can damage the optical characteristics of the LED, which in turn alters the flux and colour of the output.


----------



## SemiMan (Dec 22, 2007)

Well, technically, since a reel is supposed to be all from the same bin, you really only need two LEDs that are more than the measurement tolerance outside the bin to prove that there are quality control issues.

Semiman


----------



## TorchBoy (Dec 22, 2007)

cmacclel said:


> ... when I previously measured them from the side ...


Er, sorry, did you just say your first measurements were _from the side_?


----------



## Stillphoto (Dec 22, 2007)

As Mac already stated, this was just a quick experiment, not held to scientific standards. Yeah, the best way would be to get 1000 of the same bin and an integrating sphere and have at it, but who wants to front the money for a study like that? 

Personally I feel that Cree should be doing a bit more testing. Or at least taking a random sample of a couple hundred every now and then and running some diagnostics on them. I'm sure they do to some extent, but maybe not enough?

Thanks for the info Mac. Interesting to know how much the light varies coming from the side, considering reflectors are collecting that and kicking it forward. So while the side measurements might have greater variance than the readings taken head-on, it's still some food for thought. 

My god, stats class is coming back to me..variance...lol


----------



## TorchBoy (Dec 22, 2007)

Stillphoto said:


> stats class is coming back to me..variance...lol


Huh. And you said you'd never have a use for all that stuff. :thumbsup:


----------



## EntropyQ3 (Dec 22, 2007)

Stillphoto said:


> As Mac already stated, this was just sort of a quick experiment into this, and not being held to scientific standards. Yeah the best way would be to get 1000 of the same bin and an integrating sphere and have at it, but who wants to front the money for a study like that?




Not pointing fingers at you, but... anyone who thinks doing science requires thousands of experiments hasn't been doing science. Period.



WeLight said:


> I call your disagree and raise it; I have a stats book around here somewhere that says at an absolute minimum you would need 300 pcs, and preferably 1000, to be statistically relevant.


Statistically relevant for what? According to what criteria? 

As SemiMan pointed out, we don't really need to find more than two LEDs that differ by more than the bin spread (±7% if you want to be charitable) to show that Cree doesn't fulfill the criteria they give in their pdfs, if we disregard their slightly vague wording. 

Those Cree PDFs appear (as so often seems to be the case nowadays) to serve both as technical reference and as promotional material at the same time. If we regard them in their role of promotional material, we would be fools to blindly assume that they will accurately describe, or indeed describe at all, any limitations in QC. (Compare for instance to manufacturers of high quality bearings, places selling high grade chemicals, precision instrument manufacturers et cetera.) 

It is quite reasonable to test. The resulting data is very relevant to all enthusiasts and small scale manufacturers. We had some interesting early results, and now we're discussing how the tests can be improved without going overboard, to make any conclusions that much more solid. Isn't that how these things are supposed to work?


----------



## Stillphoto (Dec 22, 2007)

No worries, Entropy... as Greenie said, Mac's sample size was fine for a quick comparison of Q5's vs. P4's. Mathematically speaking, his sample size is small, but again that's not anything to worry about. As sample size increases, the accuracy of the overall picture increases, but here we have already seen enough, and honestly the numbers would be similar. Unless of course Cree made a change at some point on their assembly line lol. 

*Math and science hat now off* 

Whew, me likey bright lights, more please.


----------



## TorchBoy (Dec 22, 2007)

EntropyQ3 said:


> According to *what* criteria?
> 
> As SemiMan pointed out, we don't really need to find more than two LEDs that differ by more than the bin spread (±7% if you want to be charitable) to show that Cree doesn't fulfill the criteria they give in their pdfs, if we disregard their slightly vague wording.


I would say according to *which* criteria? There are two that I can see: lumens at 350 mA, and lumens per watt at 350 mA. It's possible to be in-bin for one but not the other, thanks to a hugely varying forward voltage.

FWIW, I was taught that the minimum sample size to be statistically significant is 30.
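A quick worked example of how those two criteria can diverge (the flux and Vf numbers here are invented for illustration, not datasheet values):

```python
def efficacy_lm_per_w(lumens, vf_volts, current_a=0.350):
    """Luminous efficacy (lm/W) at a given drive current."""
    return lumens / (vf_volts * current_a)

# Two hypothetical LEDs with identical flux but different forward
# voltage: same lumen bin, noticeably different lm/W.
led_a = efficacy_lm_per_w(100, 3.3)   # ~86.6 lm/W
led_b = efficacy_lm_per_w(100, 3.9)   # ~73.3 lm/W
print(round(led_a, 1), round(led_b, 1))
```

So an LED can hit the lumen target while missing the efficacy one, purely because of the Vf spread Mac observed.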


----------



## greenLED (Dec 22, 2007)

WeLight said:


> I call your disagree and raise it; I have a stats book around here somewhere that says at an absolute minimum you would need 300 pcs, and preferably 1000, to be statistically relevant. There remains too little data on the test methodology.


Wow! I've never heard of using such large sample sizes. I'd spend a lifetime trying to find that many sample units. I mostly rely on Steel et al. and on Neter et al. as general stats references, but that's just because they're related to my field. What stats textbook did you consult? 


Anyway, what you ultimately want to achieve is the ability to detect a particular difference between two or more entities. To do that, there are many factors to consider. In general, however, the number of replicates depends on:

- an initial estimate of variance (more variable populations require larger sample sizes to detect a difference)
- the size of the difference to be detected
- the power of your test (how confident you are that you will detect a difference)
- the level of significance (Type I error - how badly you risk making a "wrong" call)

In reality, other considerations such as availability of sample units, cost, etc. also have to be weighed in. In other words, you work with what you have (at least in my field).

Those are all questions that only Mac can answer - how certain does he want to be in his assertions, and how many resources does he have to conduct such testing?


In my field of study, we're pretty much stuck with "small" sample sizes (30 sample units is a HUGE sample size). The logistics of the real world usually force us down to a lot less than that. Sometimes you're lucky if you find 3 suitable sample units to apply a treatment to - and that is *the* absolute bare minimum! But you've gotta work with what you have.

So, we can agree to disagree on what the minimum sample size might be here, since it depends on so many different factors and may depend on which field you're working in, etc. Actually, you could easily compute a sample size from the data already at hand and go from there. 

The issue of the representativeness of this particular sample was raised previously and should also be considered if you really want to get serious about it. I also agree with your note about sample-unit manipulations.
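For a simple two-group comparison, the textbook normal-approximation formula ties those four factors together. A rough sketch, using the P4 standard deviation quoted earlier in the thread as the variance estimate and conventional significance/power choices:

```python
import math

def per_group_n(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Per-group sample size to detect a mean difference `delta`
    between two groups with standard deviation `sigma`, at two-sided
    alpha = 0.05 (z = 1.96) and power = 0.80 (z = 0.84)."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# SD ~58 was quoted for the P4 readings earlier in the thread.
print(per_group_n(58, 34))   # detect a ~10% shift (~34 units): 46 per group
print(per_group_n(58, 82))   # detect the observed Q5-P4 gap of 82: 8 per group
```

Nowhere near 300-1000 pieces for differences of this size; for the Q5-vs-P4 gap actually observed, a handful per group already suffices.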


----------



## VidPro (Dec 22, 2007)

Myself - meaning me only. I think it's great when people test, and frankly I don't care what they think is a good test, or how sophisticated their testing is.
In the forum they have 30 different light meters that can have variations in color bias and all that stuff. I think that "pro" tests might be better than loose tests, and $2000 meters might be better than $30 ones.
Too bad.
You have the test parameters, the tester does a test for themselves and posts THEIR results. I like that, the more the merrier.

I would like to see (meaning only me) anybody having a fit about the test do their own test for themselves, and post the results.

Give me more data, and from that I am still capable of making my own decision and doing my own tests.

No matter HOW rotten your equipment is, your setup is, your skills are, or you think they are or might be, just keep posting results, and pictures, because they are great. I think you can't cross up what one person does with another's; they say tomato, you say tomahto.

If the heat sinking isn't the same?? So what, they just showed what happens when the heat isn't the same. The driver isn't perfect? Well, they just showed more reality in drivers.  The point where the light hits isn't similar? Helloooo, they just showed that the point where the light hits isn't similar. Life is like that; it isn't always in a Petri dish, or an integrating sphere.

Everything the people do to try is of value, maybe not to you, but to me. If you don't like it, I would LOVE to see your tests. In the reviews (and on the web) we can have 6 different users review 6 different lights with 6 different results and opinions, and if I buy that light I will have a 7th  that might be different from all the other 6.
So
all we really need here is 6 different tests of 50 Cree pieces, or 2 or 200 or whatever. Any volunteers?  Or would your testing method not be good enough :-( It's good enough for me? Pleaseeee

I can test LEDs with a method you would all freak over, with no light meter. Just hit a set of similar LEDs onto a piece of glow paper, and observe the charge level each one produces. Light something up with a row of them, and take a picture. Put them in something and observe any differences there. Line up a row of the same flashlights against a wall, and take a pic. Everybody can test in some way.


----------



## TorchBoy (Dec 22, 2007)

VidPro said:


> I can test LEDs with a method you would all freak over, with no light meter. Just hit a set of similar LEDs onto a piece of glow paper, and observe the charge level each one produces. Light something up with a row of them, and take a picture. Put them in something and observe any differences there. Line up a row of the same flashlights against a wall, and take a pic. Everybody can test in some way.


Or for each LED set a digital camera to fixed aperture, white balance and ISO, and let its centre-weighted averaging light meter figure out what shutter speed it should use. The tint would be seen from the pic itself, and the brightness could be inferred from the shutter speed.
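That camera trick reduces to simple arithmetic: at fixed aperture and ISO, an averaging meter exposes every shot to the same target, so scene luminance is inversely proportional to the shutter time it picks. A small sketch, where the shutter speeds are invented examples rather than measured data:

```python
import math

def relative_brightness(shutter_s, ref_shutter_s):
    """Brightness relative to a reference LED: the brighter the LED,
    the shorter the shutter time the auto-exposure meter chooses."""
    return ref_shutter_s / shutter_s

def stops_difference(shutter_s, ref_shutter_s):
    """Same ratio expressed in photographic stops (one stop = 2x)."""
    return math.log2(ref_shutter_s / shutter_s)

# Hypothetical readings at fixed aperture, ISO and white balance:
readings = {"LED A": 1/250, "LED B": 1/320, "LED C": 1/200}
ref = readings["LED A"]
for led, t in readings.items():
    print(f"{led}: {relative_brightness(t, ref):.2f}x ({stops_difference(t, ref):+.2f} stops)")
```

With these made-up numbers, an LED metered at 1/320 s comes out 1.28x (about a third of a stop) brighter than one at 1/250 s; tint would still have to be judged from the picture itself.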


----------



## SemiMan (Dec 22, 2007)

I would answer that the criteria would be luminous flux and/or chromaticity coordinates. These are the only items actually specified in the binning and labelling document. The lumens/watt figure is marketing data, not tested data. Flux and chromaticity would be tested, at 350mA.

If you can find any LED that is outside the specifications (tested by the same method), then the quality is suspect. And/or just prove that two from the same reel are more than one bin apart.

Semiman



TorchBoy said:


> I would say according to *which* criteria? There are two that I can see - lumens at 350 mA and lumens per watt at 350 mA. It's possible to be in-bin for one but not the other, thanks to a hugely varying forward voltage.
> 
> FWIW I was taught a minimum sample size to be statistically significant is 30.


----------



## SemiMan (Dec 22, 2007)

No doubt I will start a debate, but hey, what is a public forum for.

This is a public forum. This forum is no doubt read by a lot of people who have limited knowledge of LED technology and those who have considerable knowledge in this area.

When those who have limited knowledge read a post that shows widely varying results for Cree LEDs, they will often take it as fact. Here is a person who has the equipment, knowledge, and lots of LEDs to do a test.... it must be right. There were those of us with the knowledge and experience to suspect that something was not right with the results. Unfortunately, the discussion was a little heated (I don't condone calling someone else's results BS), but nonetheless the results were rightly called into question, and it turned out the test was done measuring the light from the side. With the XR-E, which has a narrower cone of light than most power LEDs, once you pass a certain angle the drop-off is steep, and small angle differences could make a huge difference in the results, both from the angle of measurement and likely from manufacturing variation. In a typical design, even a flashlight, that variance at that angle is not going to come into play (though from a design standpoint it may give you rings in your beam pattern if you do not have an appropriate optic).
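The angle sensitivity is easy to see with a toy model. This is not the XR-E's actual radiation pattern, just a Gaussian stand-in with an assumed 90-degree FWHM, to illustrate why side measurements amplify small mounting errors:

```python
import math

FWHM_DEG = 90.0            # assumed beam width for the sketch, not a datasheet fit
SIGMA = FWHM_DEG / 2.355   # Gaussian sigma from full width at half maximum

def rel_intensity(theta_deg):
    """Relative intensity at an angle off-axis (Gaussian toy model)."""
    return math.exp(-theta_deg ** 2 / (2 * SIGMA ** 2))

def pct_change(theta_deg, error_deg=2.0):
    """Percent reading change caused by a small angle error at theta."""
    i0 = rel_intensity(theta_deg)
    i1 = rel_intensity(theta_deg + error_deg)
    return 100 * (i1 - i0) / i0

for theta in (0, 30, 60):
    print(f"at {theta:2d} deg off-axis, a 2 deg error shifts the reading by {pct_change(theta):+.1f}%")
```

In this sketch a 2-degree error costs well under 1% on-axis, but roughly 8% at 60 degrees off-axis, so unit-to-unit differences in die placement or lens seating would dominate a side measurement.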

I have made inaccurate posts on CPF and been called on it. I thanked those who did for pointing out my mistake.....I prefer to make them only once.

Moderators, perhaps we need a thread on light and source testing with a sticky post of best practices for doing tests?

Semiman



VidPro said:


> Myself - meaning me only. I think it's great when people test, and frankly I don't care what they think is a good test, or how sophisticated their testing is.
> In the forum they have 30 different light meters that can have variations in color bias and all that stuff. I think that "pro" tests might be better than loose tests, and $2000 meters might be better than $30 ones.
> Too bad.
> You have the test parameters, the tester does a test for themselves and posts THEIR results. I like that, the more the merrier.


----------



## ICUDoc (Dec 22, 2007)

Hi All
1. Thanks Mac for your results- the data is all helpful. I look forward to the new runs done in a light box or similar, and I find the change in variance when measured close-up quite intriguing.
It might be worth trying to measure the same LED twice and see what variation you get, i.e. small angle changes in mounting by translation or rotation might produce big flux changes.
2. I think an important point to raise is that your data were designed to test XR-E vs XR-E and Rebel vs Rebel, NOT XR-E vs Rebel. The numbers are clearly not comparable.
3. VanIsleDSM pull your head in. Calling comments by Mac "BS" is insulting, rude, against the spirit of this argument and unlikely to win you friends on CPF.
4. The sample size is EASILY big enough to give results BECAUSE THE SAMPLE IS THE POPULATION, i.e. what Mac is putting in his lights comes from the bunch of LEDs he has on his desk, and the spread is the spread. So there is no need to test more LEDs if you are trying to test Mac's hypothesis, which is "There seems to be a big spread in LED brightness here". Sure, it ain't perfect, but it's better than a guess or an eyeball estimate.
5. When I test my LEDs I poke them into a light box on a CC driver and move them about to measure the peak flux (there is no direct or single-reflection path to the meter: multiple diffusers). This lets me compare different sources regardless of their emission pattern +/- reflectors etc. So it ain't perfect either but seems to give sensible numbers. I will try some Q5s tonight and post.
6. (Never say there are X answers to an exam question because you look silly if you get to X-1 and then forget the last one.)
7. This thread made me crankier than The Group 5 HID one did.
Anyhow Merry Christmas all.


----------



## PlayboyJoeShmoe (Dec 22, 2007)

I tended to believe Mac from the start.

Now that he modified the test I still believe.

Heck, I thought the Luxeon lottery was behind us. But I've gotten a couple of pink/purple LUXIII lately so it's still in play.

We have also heard of Task Force 2C Cree brightness differences, and modders putting Q5 in in place of lower bins and seeing no major difference.

With a leading/bleeding-edge product there are likely to be issues.

Mac has shown us what some of these issues are.

:goodjob:


----------



## TorchBoy (Dec 23, 2007)

SemiMan said:


> When those who have limited knowledge read a post that shows widely varying results for Cree LEDs, they will often take it as fact. Here is a person who has the equipment, knowledge, and lots of LEDs to do a test.... it must be right. There were those of us with the knowledge and experience to suspect that something was not right with the results. Unfortunately, the discussion was a little heated (I don't condone calling someone else's results BS), but nonetheless the results were rightly called into question, and it turned out the test was done measuring the light from the side. *With the XR-E, which has a narrower cone of light than most power LEDs, once you pass a certain angle the drop-off is steep, and small angle differences could make a huge difference in the results, both from the angle of measurement and likely from manufacturing variation.* In a typical design, even a flashlight, that variance at that angle is not going to come into play (though from a design standpoint it may give you rings in your beam pattern if you do not have an appropriate optic).


Thank you for a good summary of what was apparently going on (at least, by my present understanding). If the first explanation of the test had simply mentioned those three little words - "from the side" - most of the argument would have been avoided, since we could have then accepted the test results for what they actually were, which is measurements from a point where high variation could be expected. This highlights the need to explain exactly what was done, how it was measured, etc.

I still don't know what sort of area the meter measures over.


----------



## VanIsleDSM (Dec 23, 2007)

Apparently some view "BS" as some pretty strong wording.. I don't.. so sorry if that offended you; I'll refrain from using that specific term.

A more polite synonym wasn't the first thing to pop into my mind when I saw the 3000-6000K claim.. how about misinformation.. that sounds better.

As earlier stated, you don't want people with little LED knowledge taking that as fact..


----------



## VanIsleDSM (Dec 23, 2007)

TorchBoy said:


> Or for each LED set a digital camera to fixed aperture, white balance and ISO, and let its centre-weighted averaging light meter figure out what shutter speed it should use. The tint would be seen from the pic itself, and the brightness could be inferred from the shutter speed.



That's a good idea.


----------



## easilyled (Dec 23, 2007)

VanIsleDSM said:


> Apparently some view "BS" as some pretty strong wording.. I don't.. so sorry if that offended you; I'll refrain from using that specific term.
> 
> A more polite synonym wasn't the first thing to pop into my mind when I saw the 3000-6000K claim.. how about misinformation.. that sounds better.
> 
> As earlier stated, you don't want people with little LED knowledge taking that as fact..



The definition of "BS" is as follows according to the Cambridge Dictionary:-

Definition
bullshit
exclamation, noun  OFFENSIVE
complete nonsense or something that is not true:
Bullshit! He never said that!
He gave me some excuse but it was a load of bullshit.

bullshit
verb [I or T] -tt- OFFENSIVE
to try to persuade or impress someone by saying things that are not true:
You're bullshitting me!
Quit bullshitting, will you!

bullshitter
noun [C] OFFENSIVE

(from Cambridge Advanced Learner's Dictionary)



Seems the Cambridge Dictionary considers it OFFENSIVE.

For someone who considers himself smart, you are not going about things in a particularly astute fashion.

Or should we just give you the benefit of the doubt and assume that your command of English is poor?


----------



## cmacclel (Dec 23, 2007)

VanIsleDSM said:


> Apparently some view "BS" as some pretty strong wording.. I don't.. so sorry if that offended you; I'll refrain from using that specific term.
> 
> A more polite synonym wasn't the first thing to pop into my mind when I saw the 3000-6000K claim.. how about misinformation.. that sounds better.
> 
> As earlier stated, you don't want people with little LED knowledge taking that as fact..





I have had purple to yellow tinted LEDs from the same reel. Maybe they are not 3000-6000K, as I have no way of measuring color temperature, but I do know that I have RIPPED apart finished tri-LED builds to replace emitters that were very different from the rest.

For all the people saying that I should have done this or I should have done that: well, next time (if there is a next time) I'll be sure to post pictures of my test setup.

There must be variation in the Cree LEDs themselves, which showed up in the way I tested my LEDs.

If each LED was placed in exactly the same position, you would think it would not matter WHERE the light meter was measuring to get valid *comparative* results. I found out this is not the case with the XR-E LEDs, for some reason.

The Rebels, on the other hand, gave comparable results no matter where the light meter sensor was placed.


Mac


----------



## TorchBoy (Dec 23, 2007)

cmacclel said:


> next time (if there is a next time) I'll be sure to post pictures of my test setup.


Awesome! :twothumbs


----------



## VanIsleDSM (Dec 23, 2007)

easilyled said:


> The definition of "BS" is as follows according to the Cambridge Dictionary:
> 
> ....
> 
> ...



Now I'm being directly insulted? Does the Cambridge Dictionary take my specific context into account? Regional differences and social trends? ..don't be ridiculous, my friend.

I already explained that I didn't mean it in an offensive way and apologized if it was taken as such.. now you've gone another step further.. I'll bite my tongue about your 'smart' comment and how it pertains to your own command of language.... and we should just get back to relevant information now.. ya?


----------



## PlayboyJoeShmoe (Dec 23, 2007)

Thanks Mac! 

And maybe I should be looking at DX Rebel lights instead of Cree lights...


----------



## znomit (Dec 23, 2007)

The original posting really needs an oops appended to it.


----------



## cmacclel (Dec 23, 2007)

znomit said:


> The original posting really needs an oops appended to it.



Done

And I really don't think it needs an Oops. 

Who would have thought that you could not compare the *same* brand, *same* bin LEDs regardless of where the light meter sensor is placed. That only leads me to believe that the XR-E tolerances vary GREATLY.

Mac


----------



## TorchBoy (Dec 24, 2007)

Mac, if anything, your test _showed_ that Crees can't be comparatively measured from the side, so only your conclusion needs the change. The test itself is still useful, not invalid at all for that purpose.


----------



## Grumpy (Dec 24, 2007)

Mac, 

Thanks for doing this test. I enjoy reading about the tests that people here at CPF do. I also enjoyed following this thread. It definitely had some interesting findings.

I appreciate all of the people that take the time to do tests, post beam shots, reviews, etc.


----------



## SAVAGESAM (Jan 4, 2008)

Semiman, I agree with SOME of what you said; however, there are some of us who know we don't know enough to believe as truth or assume as fact what we read/hear. Vidpro, great post. The more tests the better. I learned a lot reading every post here. The more I learn, the more I realize how much MORE I don't know. I also agree, IMHO, that you shouldn't come to any forum, whether new or well seasoned, and call someone's results BS. I'm still new here and I thank you all for your posts and testing. I feel like I'm getting schooled for free:thumbsup:


----------



## IMSabbel (Jan 4, 2008)

SAVAGESAM said:


> Semiman, I agree with SOME of what you said; however, there are some of us who know we don't know enough to believe as truth or assume as fact what we read/hear. Vidpro, great post. The more tests the better. I learned a lot reading every post here. The more I learn, the more I realize how much MORE I don't know. I also agree, IMHO, that you shouldn't come to any forum, whether new or well seasoned, and call someone's results BS. I'm still new here and I thank you all for your posts and testing. I feel like I'm getting schooled for free:thumbsup:



As a scientist, I can assure you that it's the very best of form to call BS when you see it, because it can happen even in the highest places (Science and Nature retract papers, too).

And more tests are WORSE than no tests if they are faulty and just spread disinformation.


----------



## saabluster (Jan 4, 2008)

IMSabbel said:


> As a scientist, I can assure you that it's the very best of form to call BS when you see it.



True. But there are kinder ways of saying it.


----------



## [email protected] (Jan 4, 2008)

Please bicker on in the UG, but behave nicely on CPF...

:thanks:


----------

