**7. Statistical analysis of experimental data**

As important as it is to design and execute meaningful investigations into the effects of toxic agents on cellular functions, it is equally important to validate them statistically. Experimental results need to be accurate and consistent within the framework of the defined system, and reproducibility of protocols and outcomes is essential when formulating the further scope of any research. To this end, several statistical tools can be employed, including the mean (average), standard deviation, p value, and range of error.
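As a minimal sketch of these tools, the example below computes the mean, sample standard deviation, a 95% error range, and a two-tailed p value for a hypothetical comparison of control versus exposed cell-viability readings. All values are illustrative, and the p value uses a normal (z) approximation so that only the Python standard library is needed; an exact two-sample t-test (e.g. `scipy.stats.ttest_ind`) would be used in practice.

```python
# Hypothetical cell-viability readings (% of control) from control wells
# and nanoparticle-exposed wells. Values are illustrative only.
import math
from statistics import mean, stdev, NormalDist

control = [98.2, 101.5, 99.8, 100.4, 97.9, 102.1, 99.3, 100.8]
exposed = [88.4, 91.2, 86.7, 90.5, 89.9, 87.3, 92.0, 88.8]

def summarize(data):
    """Return mean, sample standard deviation, and a 95% error range."""
    m, s = mean(data), stdev(data)
    half_width = 1.96 * s / math.sqrt(len(data))  # normal approximation
    return m, s, (m - half_width, m + half_width)

def two_sample_p(a, b):
    """Two-tailed p value via a z-approximation (Welch-style standard error)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

m, s, ci = summarize(exposed)
p = two_sample_p(control, exposed)
print(f"exposed: mean={m:.1f}%, SD={s:.1f}, 95% range={ci[0]:.1f}-{ci[1]:.1f}")
print(f"p value (control vs exposed): {p:.3g}")
```

A small p value here would indicate that the drop in viability in exposed wells is unlikely to be explained by chance alone, while the error range conveys the precision of the estimate across replicate wells.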

Particularly in the study of toxicity from nanoparticle exposure, the number of agents to be tested is growing exponentially every year. It therefore becomes prohibitively time-consuming and costly to perform the number of experiments required to test each particle across its various routes of exposure. This necessitates the development of adequate and accurate prediction tools for toxicity. A rising approach today is the quantitative structure–activity relationship (QSAR) model [31]. A response curve is defined, various physicochemical and biological parameters are gathered or determined, and the fit of these descriptors to the response is evaluated. For *in vitro* experiments, the coefficient of determination (R²) of any such model should be greater than 0.81. Such tools help screen large numbers of particles for their toxic outcomes, ensuring that only the most promising leads are further evaluated with actual wet-laboratory experiments and, in the process, saving resources, time, and funds.
