**10. Conclusion and outlook**

In this chapter, we have summarised the state of neuromorphic computing to date, discussing its materials, algorithms, circuits, and applications. Although neuromorphic devices show promising characteristics, several challenges must still be solved before energy-efficient neuromorphic systems can be realised. Neuromorphic computing technology has created high expectations, and its market continues to expand across different fields.

The automotive industry is one of the fastest-growing markets for neuromorphic chips. All of the leading vehicle manufacturers are working towards Level 5 vehicle autonomy, which is anticipated to create enormous demand for AI-capable neuromorphic processors. Autonomous driving requires continual advances in AI algorithms that deliver maximum throughput at minimal power. Neuromorphic chips excel at classification tasks and could be applied to self-driving cars in several ways. Compared with static deep learning solutions, they also cope well with the noisy environments that self-driving vehicles operate in. According to Intel, an autonomous vehicle may generate an estimated four terabytes of data over approximately an hour and a half of driving, roughly the amount of time a person spends in their vehicle on average each day. The ability of autonomous vehicles to handle all of the data generated during these trips efficiently will be put to the test.

Among the uses of neuromorphic chips in vehicles are Advanced Driver Assistance System (ADAS) applications, including image learning and recognition. Such a system functions similarly to standard ADAS features found in passenger vehicles, such as cruise control or intelligent speed assistance: by sensing traffic information marked on roads, such as crosswalks, school zones, and speed bumps, it can regulate vehicle speed.
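The decision logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the feature names, speed values, and the `advised_speed` function are assumptions for the sake of the example, not part of any real ADAS specification, and a production system would fuse many more signals.

```python
# Hypothetical intelligent-speed-assistance rule: map road features
# recognised by the perception stage to advisory speed caps (km/h).
# All names and numbers here are illustrative assumptions.
FEATURE_SPEED_LIMITS = {
    "school_zone": 30,
    "crosswalk": 40,
    "speed_bump": 20,
}

def advised_speed(posted_limit: int, detected_features: list) -> int:
    """Return the lowest speed implied by the posted limit and any
    detected road features (unknown features are ignored)."""
    caps = [FEATURE_SPEED_LIMITS[f]
            for f in detected_features
            if f in FEATURE_SPEED_LIMITS]
    return min([posted_limit] + caps)

# Example: a 50 km/h road where a school zone and a speed bump are detected.
print(advised_speed(50, ["school_zone", "speed_bump"]))  # 20
```

The sketch takes the minimum of all applicable limits, so the most restrictive feature always wins; the neuromorphic chip's role in the passage above is the recognition step that produces `detected_features`.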

Some of the major market players, such as Intel Corporation and IBM Corporation, are based in North America, and owing to factors including government initiatives and investment activity, the market for neuromorphic chips is expanding in the region. For instance, in September 2020 the Department of Energy (DOE) announced USD 2 million in funding for five key research projects to advance neuromorphic computing. The DOE's initiative supports the development of software and hardware for brain-inspired neuromorphic computing.

The industry is also growing as a result of the miniaturisation of neuromorphic chips, which benefits numerous applications. For instance, in June 2020 MIT engineers unveiled a "brain-on-a-chip" smaller than a piece of confetti, built from a large number of memristors. Such chips could be used in portable AI devices. The Canadian government is also focusing on artificial intelligence technology, which will support advances in neuromorphic computing over the coming years. For instance, the governments of Canada and Quebec joined together in June 2020 to foster the responsible development of AI, focusing on a variety of topics including robust AI, commercialisation, data management, and the future of work and development.

There is also tremendous interest in companies' research and development activities. For instance, in October 2020 Intel and Sandia National Laboratories began a collaboration to explore the value of neuromorphic computing for scaled-up computational problems. Sandia National Laboratories is one of three National Nuclear Security Administration research and development laboratories in the United States.

The penetration of neural-based chipsets into well-known applications is another factor driving market expansion. For instance, in November 2020 Apple, one of the most important technology companies, released the M1 chip, designed specifically for its Mac products. The M1 brings the Apple Neural Engine to the Mac and accelerates AI tasks; its 16-core Neural Engine can perform 11 trillion operations per second, enabling up to 15 times faster ML execution.
