
filters was modified in order to observe what happens if the layer is sensitive to more features or if it is sensitive to features that are not relevant for all the types of HHG experiments. Thirdly, several pooling methods were tested for the pooling layers in each network, namely the classical max pooling, average pooling and stochastic pooling. Last but not least, the dropout and constructive learning algorithms were applied to the fully connected layer, resulting in more CNN configurations. For efficiency purposes, regularization methods such as L2 [119] and elastic net regularization [120] were applied to all the convolutional layers and to the fully connected layer whenever some of the weights were observed to peak excessively. The objective was to force the layers of the CNN to make use of all of their inputs at the same rate (as much as possible) rather than to use portions of their inputs preferentially. The risk, however, is ending up with a network layer whose neuron weights are "diffuse" and rather small. Elastic net regularization, a combination of the L1 and L2 penalties, proved to be more efficient than either of the two alone. Ensemble learning was also deployed, just as before, averaging either over the predictions offered by all the networks or over those of the best performing 10% of the configurations. The best performing three configurations are labeled CNN1, CNN2 and CNN3, respectively. All the networks take the same input size, namely the 20 × 20 × 20 volume described above, and were exposed to elastic net regularization. Their configurations are as follows. CNN1 has four convolutional layers. The first one has 128 filters with a filter size of 5 × 5 × 20; the second and third convolutional layers have 256 filters but a smaller filter size, more precisely 3 × 3 × 20. Finally, the fourth convolutional layer has 512 filters and the same filter size as the latter two.
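The chapter does not include code, so the elastic net penalty described above can be sketched as follows; a minimal pure-Python illustration (the function name and the λ values are mine, not the author's), showing why a single "peaking" weight is penalized more heavily than evenly spread weights of the same total magnitude:

```python
def elastic_net_penalty(weights, l1=1e-4, l2=1e-4):
    """Elastic net term added to the loss: l1 * sum(|w|) + l2 * sum(w^2).

    Setting l1=0 recovers plain L2 (ridge) regularization; setting l2=0
    gives pure L1 (lasso). The combination is the elastic net of [120].
    """
    return l1 * sum(abs(w) for w in weights) + l2 * sum(w * w for w in weights)

# Two weight vectors with the same L1 norm: the "peaked" one has a larger
# L2 norm, so it pays a higher penalty, nudging the optimizer toward
# layers that use all of their inputs at roughly the same rate.
flat = [0.5, 0.5, 0.5, 0.5]
peaked = [2.0, 0.0, 0.0, 0.0]
print(elastic_net_penalty(flat) < elastic_net_penalty(peaked))  # True
```

In a real training loop this term is simply added to the data loss before the gradient step.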
After the first and the third layers, a pooling layer was introduced. The pooling layers use stochastic pooling. The network's architecture ends with a fully connected 3D cubic layer with 1024 units. It can be noticed that applying a cube root to this value does not yield an integer number of units per dimension. This is because dropout and constructive learning were applied to the fully connected layer, resulting in either vacancies or insertions of neurons into the volume, and in an overall addition of 24 units. The training of CNN1 was done in batches of 512 examples per gradient step, with stochastic gradient descent used for the cost function optimization along with the bespoke backpropagation of errors. CNN2 has five convolutional layers, also optimized with elastic net regularization, the first four being identical to CNN1's. The fifth layer has 512 filters and a filter size of 3 × 3 × 20, and it is followed by the sole pooling layer of CNN2. This pooling layer also employs stochastic pooling. The network's architecture ends with two fully connected 3D cubic layers with 1024 units each, but with different configurations of neurons within the layers' volumes. This is again due to dropout and constructive learning applied to the fully connected layers. The training of CNN2 was done in the same way, but the cost function optimization was achieved via Levenberg–Marquardt. Last but not least, CNN3 also has five convolutional layers (elastic net regularization was applied to the weights), with the first layer having 126 filters and the same 5 × 5 × 20 filter size. The second and the third layers have 252 filters, the second with a 5 × 5 × 20 filter size and the third with 3 × 3 × 20. The fourth and the fifth have 504 filters with the same filter size as the previous one. CNN3 has just one pooling layer, in between the fourth and the fifth layers, which makes use of max pooling.
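The stochastic pooling used in CNN1 and CNN2 replaces the deterministic maximum with a sample drawn from the activations of each pooling region. A minimal sketch of that sampling step, assuming nonnegative activations and a flattened region (the helper name is hypothetical, not from the chapter):

```python
import random

def stochastic_pool(region, rng=random):
    """Stochastic pooling over one flattened pooling region: each
    nonnegative activation is selected with probability proportional to
    its magnitude, so strong responses usually, but not always, win,
    unlike in deterministic max pooling."""
    total = sum(region)
    if total == 0:              # all-zero region: nothing to sample from
        return 0.0
    r = rng.uniform(0, total)
    acc = 0.0
    for a in region:            # inverse-CDF sampling over the activations
        acc += a
        if r <= acc:
            return a
    return region[-1]           # guard against floating-point drift

# A flattened 2x2 region: 4.0 is selected with probability 4/8, 2.0 with 2/8.
pooled = stochastic_pool([1.0, 2.0, 1.0, 4.0])
```

At test time, stochastic pooling is typically replaced by the probability-weighted average of the region, which the same `total` and `region` values make straightforward to compute.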
The last convolutional layer is followed by two fully connected layers of 768 units each, which were subject to dropout and constructive learning. The training was also done in batches, with stochastic gradient descent employed alongside the AdaDelta adaptive learning method [121]. CNN1 and CNN2 use a stride of one for all the convolutional layers.
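The AdaDelta method [121] used for CNN3 adapts a per-parameter step size from two running averages, so no global learning rate has to be hand-tuned. A single-parameter sketch of the update rule (my illustration of the published algorithm, not the author's implementation):

```python
def adadelta_step(w, grad, state, rho=0.95, eps=1e-6):
    """One AdaDelta update for a scalar weight.

    state = (Eg2, Edx2): exponentially decayed averages of the squared
    gradients and of the squared parameter updates. The ratio of their
    roots acts as an automatically adapted per-parameter learning rate.
    """
    Eg2, Edx2 = state
    Eg2 = rho * Eg2 + (1 - rho) * grad * grad
    dx = -((Edx2 + eps) ** 0.5 / (Eg2 + eps) ** 0.5) * grad
    Edx2 = rho * Edx2 + (1 - rho) * dx * dx
    return w + dx, (Eg2, Edx2)

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, state = 1.0, (0.0, 0.0)
for _ in range(200):
    w, state = adadelta_step(w, 2 * w, state)
```

In a network, the same pair of accumulators is kept per weight, and the mini-batch gradient from backpropagation takes the place of `2 * w`.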

106 Machine Learning - Advanced Techniques and Emerging Applications

4. Conclusion

Technological advances in the field of laser-plasma interaction and diagnostics have provided the scientific community with vast amounts of data. Within the last few years we have been experiencing continuously increasing access not only to storage space and computing power, but also to a multitude of ready-built and easily modifiable open-source software libraries. It is thus becoming less and less problematic to exploit and explore this already available information in ways that have never been attempted before.

This paper proposes an alternative to the classical plasma kinetics simulations. Acknowledging the potential that innovative technologies like cloud computing, big data, machine learning and, ultimately, deep learning have for science, the author showed how these can be used for predictive modeling of laser-plasma interaction scenarios, with a focus on high harmonics generation. The deployment of the presented systems has the potential of yielding better predictive analytics and hence optimized laser-plasma interaction experiments, by offering a fair estimation of interaction conditions or insights into different phenomena occurring during the laser-plasma interaction.

Overcoming Challenges in Predictive Modeling of Laser-Plasma Interaction Scenarios. The Sinuous Route from…
http://dx.doi.org/10.5772/intechopen.72844

[10] Pfund RE et al. LPIC++ a parallel one-dimensional relativistic electromagnetic particle-in-cell code for simulating laser-plasma interaction. AIP Conference Proceedings. 1998;426:141

[11] Lichters R et al. Short-pulse laser harmonics from oscillating plasma surfaces driven at relativistic intensity. Physics of Plasmas. 1996;3:3425

[12] Verboncoeur JP et al. An object-oriented electromagnetic PIC code. Computer Physics Communications. 1995;87:199

[13] Burau H et al. PIConGPU: A fully relativistic particle-in-cell code for a GPU cluster. IEEE Transactions on Plasma Science. 2010;38(10):2831

[14] Brady C et al. EPOCH, an open source PIC code for high energy density physics, user manual for the EPOCH PIC codes version 4.3.4. University of Warwick, Collaborative Computational Project in Plasma Physics; 2015

[15] Vsim [Internet]. 2016. Available from: https://www.txcorp.com/vsim

[16] Fonseca RA et al. OSIRIS: A three-dimensional, fully relativistic particle in cell code for modeling plasma based accelerators. In: Computational Science-ICCS 2002, Series Lecture Notes in Computer Science. Vol. 2331. Berlin/Heidelberg: Springer; 2002. pp. 342-351

[17] Fonseca RA et al. One-to-one direct modeling of experiments and astrophysical scenarios: Pushing the envelope on kinetic plasma simulations. Plasma Physics and Controlled Fusion. 2008;50:124034

[18] Fiuza F et al. Efficient modeling of laser–plasma interactions in high energy density scenarios. Plasma Physics and Controlled Fusion. 2011;53:074004

[19] Huang C et al. Quickpic: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas. Journal of Computational Physics. 2006;217:658

[20] An W et al. An improved iteration loop for the three dimensional quasi-static particle-in-cell algorithm: Quickpic. Journal of Computational Physics. 2013;250:165

[21] Tzoufras M et al. A Vlasov-Fokker-Planck code for high energy density physics. Journal of Computational Physics. 2011;230:6475

[22] Tzoufras M et al. A multi-dimensional Vlasov-Fokker-Planck code for arbitrarily anisotropic high-energy-density plasmas. Physics of Plasmas. 2013;20:056303

[23] Owens JD et al. A survey of general-purpose computation on graphics hardware. Computer Graphics Forum. 2007;26:80-113

[24] Owens JD et al. GPU computing, graphics processing units—powerful, programmable and highly parallel—are increasingly targeting general-purpose computing applications. Proceedings of the IEEE. 2008;96:879

[25] Fatahalian K, Houston M. A closer look at GPUs. Communications of the ACM. 2008;51(10):50
