An information-theoretic approach is used to study the effect of noise on various spiking neural systems. Detailed statistical analyses of neural behaviour under the influence of stochasticity are carried out, and the results are related to other work and to biological neural networks. The neurocomputational capabilities of the neural systems under study are placed on an absolute scale. A proof-of-concept algorithm, based on information theory and the coding fraction, is designed to optimise the noise level by maximising information throughput. The algorithm is applied successfully to a single neuron and then generalised to an entire neural population with various structural characteristics (feedforward, lateral, and recurrent connections). It is shown that noise gives rise to certain positive and persistent phenomena in spiking neural networks, and that these phenomena can be observed even under simplified conditions and can therefore be exploited. The transition is made from detailed, computationally expensive tools to efficient approximations, under which these phenomena remain persistent and exploitable in a variety of circumstances.
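As a rough illustration of the noise-optimisation idea, and not the actual algorithm developed in this work, the sketch below grid-searches the noise amplitude of a toy leaky integrate-and-fire neuron driven by a subthreshold stimulus and scores each level by a crude coding-fraction estimate (one minus the normalised reconstruction error of a simple linear decode). All model choices, names, and parameters here are hypothetical stand-ins.

```python
import numpy as np

def simulate_lif(stim, sigma, dt=1e-3, tau=20e-3, v_th=1.0, rng=None):
    """Leaky integrate-and-fire neuron driven by `stim` plus white noise
    of amplitude `sigma` (illustrative toy model, not from the text)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    v, spikes = 0.0, np.zeros(len(stim))
    for i, drive in enumerate(stim):
        v += dt / tau * (drive - v) + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:          # threshold crossing: emit spike, reset
            spikes[i] = 1.0
            v = 0.0
    return spikes

def coding_fraction(stim, spikes, win=100):
    """Coding fraction ~ 1 - RMSE/std after a least-squares linear decode
    of a boxcar-smoothed spike train (a crude stand-in for optimal
    stimulus reconstruction)."""
    rate = np.convolve(spikes, np.ones(win) / win, mode="same")
    if rate.std() == 0.0:      # no spikes -> nothing encoded
        return 0.0
    a, b = np.polyfit(rate, stim, 1)
    rmse = np.sqrt(np.mean((stim - (a * rate + b)) ** 2))
    return max(0.0, 1.0 - rmse / stim.std())

# Subthreshold sinusoidal drive: without noise the neuron never fires.
t = np.arange(0.0, 20.0, 1e-3)
stim = 0.6 + 0.3 * np.sin(2 * np.pi * 1.0 * t)
levels = [0.0, 0.5, 1.0, 2.0, 4.0]
cf = {s: coding_fraction(stim, simulate_lif(stim, s, rng=np.random.default_rng(1)))
      for s in levels}
best = max(cf, key=cf.get)
print({s: round(v, 3) for s, v in cf.items()}, "best sigma:", best)
```

With zero noise the subthreshold neuron stays silent and the coding fraction is zero, while an intermediate noise amplitude lets the stimulus modulate the firing and so carries information through; this is the stochastic-resonance-like effect that a throughput-maximising search can exploit.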