MatthewReid854/reliability

How to use reliability package to answer if test time can be reduced or should be augmented

Closed this issue · 2 comments

I found this package gem yesterday and have been reading the documentation, trying to figure out whether I can use it to adjust stress test times.
In https://reliability.readthedocs.io/en/latest/Working%20with%20fitted%20distributions.html I read:

If you want to know the system reliability at a certain time, then you are specifying time (CI_x) and seeking bounds on reliability (CI_type=’reliability’). If you want to know the time that the system will reach a certain reliability, then you are specifying reliability (CI_y) and seeking bounds on time (CI_type=’time’)

Then I fitted the Weibull_Mixture to my data.

I was trying to use:

from reliability.Fitters import Fit_Weibull_Mixture
fit = Fit_Weibull_Mixture(failures=failures, show_probability_plot=True, print_results=True)


My expectation was to use fit.distribution.CDF(CI_type='reliability', CI_x=660*60/2) to find the reliability if I decrease the test time by half, but I got the error AttributeError: 'Line2D' object has no property 'CI_type'.

What I am not sure about, and would appreciate feedback on, is whether I could use the results of the best_distribution to answer questions like:

  1. At what point in test time does the failure rate become constant?
    1. This point could be less than or greater than the actual test time.
    2. I assume the hazard function could be of help here?
  2. If I cut the test time while reliability is at 80%, can I work out how many failures I am going to stop detecting?

I would appreciate reference material to read on this topic as well.


Here is the time-to-failure data I am using:

[24549,
 82,
 5309,
 135,
 13125,
 32108,
 12570,
 486,
 28465,
 626,
 32052,
 14011,
 7188,
 1591,
 37053,
 19333,
 19083,
 20796,
 15837,
 37488,
 28130,
 12318,
 30938,
 10674,
 464,
 346,
 4105,
 3448,
 10392,
 383,
 2824,
 1041,
 22568,
 21656,
 6120,
 12203,
 1230,
 1390,
 5222,
 7386,
 586,
 4053,
 4455,
 5000,
 2358,
 1137,
 854,
 849,
 17408,
 358,
 230,
 754,
 25879,
 28840,
 248,
 1922,
 1939,
 8725,
 8359,
 5686,
 2477,
 3758,
 442,
 398,
 32171,
 13588,
 7327,
 620,
 1772,
 1452,
 18063,
 622,
 640,
 831,
 1911,
 846,
 1797,
 36906,
 5042,
 4186,
 1347,
 1440,
 629,
 1145,
 5422,
 5667,
 7549,
 397,
 871,
 324,
 739,
 2102,
 2470,
 291,
 21732,
 2309,
 18039,
 652,
 615,
 636,
 2047,
 35802,
 2937,
 706,
 1031,
 686,
 819,
 25152,
 395,
 11606,
 16405,
 9425,
 15415,
 11473,
 8188,
 10989,
 4678,
 6069,
 2074,
 666,
 4504,
 1273,
 909,
 1486,
 839,
 103,
 32720,
 36302,
 19148,
 12867,
 18130,
 5251,
 1189,
 5487,
 1398,
 4779,
 1737,
 4606,
 5753,
 671,
 7606,
 2226,
 584,
 39081,
 31079,
 34869,
 17202,
 25166,
 16673,
 28399,
 12083,
 7473,
 2409,
 556,
 8003,
 1236,
 5848,
 937,
 2178,
 2411,
 1197,
 250]

The best distribution to model your data is certainly the Weibull Mixture as shown on the probability plot below.
[probability plot comparing the fitted candidate distributions]
Notice how some distributions don't have confidence intervals displayed? That's because confidence intervals have not been implemented in reliability for some of the more complicated models like Weibull_Mixture. It's also why you get an error when you specify CI_type: this keyword isn't accepted by the Weibull_Mixture model, so reliability passes it through to matplotlib as a plotting keyword, and matplotlib raises the AttributeError on the Line2D object.
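Even without confidence intervals, you can still evaluate the mixture's reliability at any time directly from the fitted parameters. A minimal numpy sketch, using made-up parameter values in place of the fitted attributes (fit.alpha_1, fit.beta_1, fit.alpha_2, fit.beta_2, fit.proportion_1):

```python
import numpy as np

# Survival function of a two-parameter Weibull: S(t) = exp(-(t/alpha)^beta)
def weibull_sf(t, alpha, beta):
    return np.exp(-(np.asarray(t, dtype=float) / alpha) ** beta)

# Survival (reliability) of a two-component Weibull mixture:
# a proportion-weighted sum of the component survival functions.
def mixture_sf(t, alpha_1, beta_1, alpha_2, beta_2, p1):
    return p1 * weibull_sf(t, alpha_1, beta_1) + (1.0 - p1) * weibull_sf(t, alpha_2, beta_2)

# Hypothetical parameters -- substitute your fitted values here.
alpha_1, beta_1 = 1500.0, 1.1
alpha_2, beta_2 = 20000.0, 1.8
p1 = 0.55

t_half = 660 * 60 / 2  # the halved test time from the question (19800)
print(f"Reliability at t={t_half:.0f}: "
      f"{mixture_sf(t_half, alpha_1, beta_1, alpha_2, beta_2, p1):.4f}")
```

This gives a point estimate only; it does not recover the confidence bounds that CI_type would have produced for the simpler models.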

To answer your other questions;
What could be the steady point in test time where the failure rate became constant? ==> look at the Hazard function. It seems to become pretty level (indicating a relatively constant failure rate) at around 50000 - 100000. You can view the HF further to the right by setting xmax: fit.distribution.HF(xmax = 200000)
[hazard function plot with xmax=200000]
It's hard to make this claim because the data is all below 50000 so drawing conclusions about the model at 100000 is a bit speculative. It also never fully levels out so it's always slightly increasing after that initial drop.
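If you want to inspect the hazard numerically rather than by eye, you can compute it from the mixture density and survival function. A sketch with hypothetical parameters (same made-up values as above, not your fitted ones):

```python
import numpy as np

def weibull_pdf(t, alpha, beta):
    t = np.asarray(t, dtype=float)
    return (beta / alpha) * (t / alpha) ** (beta - 1.0) * np.exp(-(t / alpha) ** beta)

def weibull_sf(t, alpha, beta):
    return np.exp(-(np.asarray(t, dtype=float) / alpha) ** beta)

# Hazard of the mixture as a whole: h(t) = f(t) / S(t), where f and S are the
# mixture density and mixture survival function (NOT a weighted sum of hazards).
def mixture_hf(t, a1, b1, a2, b2, p1):
    f = p1 * weibull_pdf(t, a1, b1) + (1.0 - p1) * weibull_pdf(t, a2, b2)
    s = p1 * weibull_sf(t, a1, b1) + (1.0 - p1) * weibull_sf(t, a2, b2)
    return f / s

t = np.linspace(100.0, 200000.0, 2000)
h = mixture_hf(t, 1500.0, 1.1, 20000.0, 1.8, 0.55)

# Relative change of the hazard per grid step -- small values over a long
# stretch are one crude way to judge where the hazard is "roughly constant".
rel_step = np.abs(np.diff(h)) / h[:-1]
print("max relative step over the last quarter of the grid:", rel_step[1500:].max())
```

As noted above, any flattening you find out past the range of the data (everything here is below 50000) is extrapolation from the model, not evidence from the data.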

If I cut test time when the reliability reaches 80%, then could I know how many failures I am going to stop detecting? ==> You can still fit a model to less data but the more data you have the more accurate the model. If you stop testing once reliability reaches 80% (i.e. 20% of the population has failed) then the number of failures you're going to stop detecting will be the remaining 80% of the population. Everything fails eventually.
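To put a number on that: you can solve for the time at which reliability drops to 80% and read off the fraction of eventual failures you would not observe. A bisection sketch on the same hypothetical mixture parameters as above:

```python
import numpy as np

def weibull_sf(t, alpha, beta):
    return np.exp(-(t / alpha) ** beta)

def mixture_sf(t, a1, b1, a2, b2, p1):
    return p1 * weibull_sf(t, a1, b1) + (1.0 - p1) * weibull_sf(t, a2, b2)

# Bisection for the time at which reliability falls to `target`
# (the survival function is monotonically decreasing, so this converges).
def time_at_reliability(target, a1, b1, a2, b2, p1, lo=1e-6, hi=1e8):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_sf(mid, a1, b1, a2, b2, p1) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

params = (1500.0, 1.1, 20000.0, 1.8, 0.55)  # hypothetical fitted values
t80 = time_at_reliability(0.80, *params)

# Stopping the test at t80 means 20% of the population has failed (and was
# observed); the remaining 80% would fail after the test ends, unseen.
print(f"t at R=80%: {t80:.0f}; fraction of eventual failures unseen: "
      f"{mixture_sf(t80, *params):.2f}")
```

With the real fitted distribution you could do the same with the model's quantile/inverse survival function instead of hand-rolled bisection.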

I see no use of right censored data in your data set. If you have unfailed items then you really should include the amount of time they survived as their right censored time. The model may change significantly when you do this.
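The fitters in reliability accept the survival times of unfailed items through their right_censored argument. To see why including them matters, here is a self-contained sketch of the underlying idea for a single Weibull with entirely made-up data: failures contribute log f(t) to the likelihood, while censored (still running) units contribute log S(t), which pulls the fitted scale upward.

```python
import numpy as np

# Weibull log-likelihood with right censoring: failures contribute log f(t),
# unfailed (right-censored) units contribute log S(t).
def weibull_loglik(alpha, beta, failures, right_censored):
    t_f = np.asarray(failures, dtype=float)
    t_c = np.asarray(right_censored, dtype=float)
    log_f = (np.log(beta / alpha) + (beta - 1.0) * np.log(t_f / alpha)
             - (t_f / alpha) ** beta)
    log_s = -(t_c / alpha) ** beta
    return log_f.sum() + log_s.sum()

failures = [500.0, 800.0, 1200.0, 2000.0, 3000.0]  # made-up failure times
survivors = [5000.0] * 5                           # unfailed when the test ended

# Crude grid-search MLE (just a sketch; reliability's fitters optimize properly).
alphas = np.linspace(500.0, 20000.0, 200)
betas = np.linspace(0.5, 3.0, 60)

def grid_mle(failures, right_censored):
    best_ll, best_a, best_b = -np.inf, None, None
    for a in alphas:
        for b in betas:
            ll = weibull_loglik(a, b, failures, right_censored)
            if ll > best_ll:
                best_ll, best_a, best_b = ll, a, b
    return best_a, best_b

a_no_cens, _ = grid_mle(failures, [])
a_cens, _ = grid_mle(failures, survivors)
print(f"alpha without censoring: {a_no_cens:.0f}, with censoring: {a_cens:.0f}")
```

The characteristic life estimated with the censored units included is much larger, which is the "model may change significantly" effect described above.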

For all future questions about "how do I do X" or "how does X work", please email me (alpha.reliability@gmail.com). GitHub issues are only for bugs and errors. I understand you may have thought there was a bug based on the AttributeError but this is just poor error messaging on my part rather than an actual error.