
LECTURE NOTES
On
Electrical & Electronics Measurement (PCEE4204)
3rd Semester, ETC Engineering

Prepared by:
Priyabrata Sethy
Prasanta Kumar Sahu
Ramachandra Dalei

INDIRA GANDHI INSTITUTE OF TECHNOLOGY, SARANG


Module-1

Accuracy and precision are defined in terms of systematic and random errors. The more common definition associates accuracy with systematic errors and precision with random errors. Another definition, advanced by ISO, associates trueness with systematic errors and precision with random errors, and defines accuracy as the combination of both trueness and precision.

In the fields of science, engineering, industry, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value.[1] The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.[1][2] Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.

A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is considered valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability).

The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data. In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement. In numerical analysis, accuracy is likewise the nearness of a calculation to the true value, while precision is the resolution of the representation, typically defined by the number of decimal or binary digits.

Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition. That is, the accuracy is the proportion of true results (both true positives and true negatives) in the population. To make the context clear, it is often referred to as the "Rand Accuracy". It is a parameter of the test.
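To make the contrast between systematic and random error concrete, here is a minimal numerical sketch in Python; the true value, the instrument bias, and the noise levels are invented purely for illustration. It simulates two measurement systems: one with no systematic error but large random error, and one with a large systematic error but small random error.

    import random
    import statistics

    TRUE_VALUE = 10.0  # the (normally unknown) true value being measured

    def simulate(bias, noise_sd, n=1000, seed=42):
        """Return n simulated readings with a fixed systematic error (bias)
        and zero-mean Gaussian random error of standard deviation noise_sd."""
        rng = random.Random(seed)
        return [TRUE_VALUE + bias + rng.gauss(0.0, noise_sd) for _ in range(n)]

    scenarios = [
        ("accurate but not precise", 0.00, 0.50),  # no bias, large scatter
        ("precise but not accurate", 0.80, 0.05),  # large bias, small scatter
    ]
    for label, bias, noise_sd in scenarios:
        readings = simulate(bias, noise_sd)
        mean = statistics.fmean(readings)
        spread = statistics.stdev(readings)
        # Systematic error shows up as an offset of the mean (poor accuracy);
        # random error shows up as scatter between readings (poor precision).
        print(f"{label}: mean offset = {mean - TRUE_VALUE:+.3f}, "
              f"std dev = {spread:.3f}")

Averaging more readings shrinks the scatter of the mean but leaves the 0.80 offset untouched, which is exactly the point made above: a larger sample size improves precision, not accuracy.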


On the other hand, precision or positive predictive value is defined as the proportion of true positives against all positive results (both true positives and false positives):

Precision = TP / (TP + FP)

An accuracy of 100% means that the measured values are exactly the same as the given values. See also sensitivity and specificity. Accuracy may be determined from sensitivity and specificity, provided the prevalence is known, using the equation:

Accuracy = Sensitivity × Prevalence + Specificity × (1 − Prevalence)

The accuracy paradox for predictive analytics states that predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. It may be better to avoid the accuracy metric in favor of other metrics such as precision and recall.[citation needed] In situations where the minority class is more important, the F-measure may be more appropriate, especially in situations with very skewed class imbalance.

Another useful performance measure is the balanced accuracy, which avoids inflated performance estimates on imbalanced datasets. It is defined as the arithmetic mean of sensitivity and specificity, or the average accuracy obtained on either class:

Balanced Accuracy = (Sensitivity + Specificity) / 2

If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test set, then the balanced accuracy, as appropriate, will drop to chance.[5]

A closely related chance-corrected measure is:[6]

Informedness = Sensitivity + Specificity − 1

A direct approach to debiasing and renormalizing accuracy is Cohen's kappa, whilst Informedness has been shown to be a Kappa-family debiased renormalization of Recall.[7]
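These definitions are straightforward to check numerically. The following Python sketch computes each of the measures just described from raw confusion-matrix counts; the counts themselves are invented for illustration and are deliberately imbalanced so that conventional and balanced accuracy diverge.

    # Confusion-matrix counts for a binary test (illustrative numbers only):
    tp, fn = 10, 40     # 50 actual positives (the minority class)
    fp, tn = 10, 940    # 950 actual negatives

    total = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    precision   = tp / (tp + fp)   # positive predictive value
    prevalence  = (tp + fn) / total

    # Conventional (Rand) accuracy: proportion of true results in the population.
    accuracy = (tp + tn) / total
    # Equivalently, from sensitivity and specificity given the prevalence:
    accuracy_alt = sensitivity * prevalence + specificity * (1 - prevalence)

    # Balanced accuracy: arithmetic mean of sensitivity and specificity.
    balanced_accuracy = (sensitivity + specificity) / 2
    # Informedness: chance level is 0 rather than 0.5.
    informedness = sensitivity + specificity - 1

    print(f"accuracy          = {accuracy:.3f} (check: {accuracy_alt:.3f})")
    print(f"precision (PPV)   = {precision:.3f}")
    print(f"balanced accuracy = {balanced_accuracy:.3f}")
    print(f"informedness      = {informedness:.3f}")

On these counts the conventional accuracy is 0.95 even though the test catches only 20% of the actual positives; the balanced accuracy (about 0.59) and the informedness (about 0.19) expose that weakness.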


Informedness and Kappa have the advantage that chance level is defined to be 0, and they have the form of a probability. Informedness has the stronger property that it is the probability that an informed decision is made (rather than a guess) when positive. When negative this is still true for the absolute value of Informedness, but the information has been used to force an incorrect response.

Error Analysis and Significant Figures

"Errors using inadequate data are much less than those using no data at all." [C. Babbage]

No measurement of a physical quantity can be entirely accurate. It is important to know, therefore, just how much the measured value is likely to deviate from the unknown, true value of the quantity. The art of estimating these deviations should probably be called uncertainty analysis, but for historical reasons it is referred to as error analysis. This document contains brief discussions of how errors are reported, the kinds of errors that can occur, how to estimate random errors, and how to carry error estimates into calculated results. We are not, and will not be, concerned with the "percent error" exercises common in high school, where the student is content with calculating the deviation from some allegedly authoritative number. You might also be interested in our tutorial on using figures (graphs).

Significant figures

Whenever you make a measurement, the number of meaningful digits that you write down implies the error in the measurement. For example, if you say that the length of an object is 0.428 m, you imply an uncertainty of about 0.001 m. To record this measurement as either 0.4 m or 0.42819667 m would imply that you know it only to 0.1 m in the first case or to 0.00000001 m in the second. You should report only as many significant figures as are consistent with the estimated error. The quantity 0.428 m is said to have three significant figures, that is, three digits that make sense in terms of the measurement. Notice that this has nothing to do with the "number of decimal places". The same measurement in centimeters would be 42.8 cm and would still be a three-significant-figure number.

The accepted convention is that only one uncertain digit is to be reported for a measurement. In the example, if the estimated error is 0.02 m you would report a result of 0.43 ± 0.02 m, not 0.428 ± 0.02 m.
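This reporting convention is mechanical enough to automate. Below is a minimal sketch of a formatting helper, assuming the one-uncertain-digit rule just stated; the function name report is invented here and is not a standard library routine.

    import math

    def report(value, error):
        """Format 'value ± error' with one significant digit in the error
        and the value rounded to the same decimal place."""
        if error <= 0:
            raise ValueError("error must be positive")
        # Decimal place of the error's single uncertain digit.
        place = -int(math.floor(math.log10(error)))
        rounded_error = round(error, place)
        # Recompute in case rounding carried over (e.g. 0.096 -> 0.1).
        place = -int(math.floor(math.log10(rounded_error)))
        digits = max(place, 0)
        return f"{round(value, place):.{digits}f} ± {rounded_error:.{digits}f}"

    print(report(0.428, 0.02))        # -> 0.43 ± 0.02, not 0.428 ± 0.02
    print(report(0.42819667, 0.001))  # -> 0.428 ± 0.001, three significant figures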
