In Focus - Issue 34 (Spring 2022)

by up to , times. At the same time, software-hardware co-designed solutions will enable companies of all sizes, including start-ups and smaller firms, to rapidly deploy their own customized AI-driven applications.

The pioneering center is being supported under the Hong Kong government’s collaborative InnoHK Clusters initiative, receiving HK$ . million in initial funding. Participating universities are HKUST, Stanford University, the University of Hong Kong, and the Chinese University of Hong Kong.

“Just think of all the sectors wanting AI acceleration, from transportation to fintech, medicine to education. It represents a massive opportunity for research, business, and social impact.”

In recent years, advances in AI and machine learning accuracy have given rise to a greater number of AI-driven applications in areas such as face and object recognition, natural language processing, and autonomous vehicles. However, such developments have come at a price: greater accuracy demands large computing and memory resources, long training periods for AI models, and major expense, limiting the widespread use of such applications. ACCESS, established in , is setting out to change this.

Customized computing chips for AI applications, also known as AI accelerators, are processors designed to speed up artificial intelligence and machine learning applications, including the internet of things, computer vision, and other data-intensive or sensor-driven tasks.

“Companies want to cram more and more intelligence into today’s sensors and devices to enhance their functions,” said Prof. Cheng, an internationally recognized leader in electronic design automation, integrated circuit design, and computer vision. “They need powerful but small, energy-efficient AI chips to carry out specific and ubiquitous tasks. But such embedded intelligence is not yet widely available.”

The center’s research addresses four key technical areas: enabling technology for emerging computer systems, architecture and heterogeneous system integration, AI-assisted electronic design automation for AI hardware, and hardware-accelerated AI applications.

Among its projects to date, teams are developing a new generation of computing-in-memory (CIM) chips that act as independent data processors, eliminating the need to send data to the cloud or a central server for analysis and then wait for the results to be returned. Removing these steps can make CIM chips hundreds of times faster, with current literature and results indicating that ACCESS chips are three times more efficient than the best-performing CIM chips currently available. (A simple latency sketch illustrating this data-movement saving appears at the end of this article.)

The center has also designed a new generation of optimized neural network prototypes on computer architecture and hardware models and completed verification on a Field Programmable Gate Array (FPGA) platform. Final design and manufacturing of ultra-low-power chips are expected to be completed in the second half of . A major goal of the center is to produce customized chips with a small design team to accelerate time to market.
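To make the data-movement argument above concrete, here is a minimal, illustrative Python sketch of a toy latency model. Every figure in it (network round-trip time, link speeds, payload size, on-chip compute time) is an assumption chosen purely for illustration; these are not ACCESS measurements, and the model does not represent the center’s actual chips or methodology.

```python
# Toy latency model (illustrative assumptions only, not ACCESS data).
# Compares a cloud round-trip inference path against purely on-chip
# compute-in-memory (CIM) processing, where data is analyzed where it is stored.

def cloud_path_latency_ms(payload_kb: float,
                          uplink_mbps: float = 10.0,
                          downlink_mbps: float = 20.0,
                          network_rtt_ms: float = 40.0,
                          server_compute_ms: float = 2.0) -> float:
    """Latency when a sensor ships raw data to a server and waits for the answer."""
    upload_ms = payload_kb * 8 / (uplink_mbps * 1000) * 1000          # send raw data
    result_kb = payload_kb * 0.01                                     # small result payload
    download_ms = result_kb * 8 / (downlink_mbps * 1000) * 1000       # receive the result
    return network_rtt_ms + upload_ms + server_compute_ms + download_ms

def cim_path_latency_ms(on_chip_compute_ms: float = 0.3) -> float:
    """Latency when a CIM chip processes the data locally, with no network hop."""
    return on_chip_compute_ms

if __name__ == "__main__":
    cloud = cloud_path_latency_ms(payload_kb=200)   # e.g. one modest camera frame
    local = cim_path_latency_ms()
    print(f"cloud round trip: {cloud:.1f} ms, on-chip CIM: {local:.1f} ms, "
          f"speedup ~{cloud / local:.0f}x")
```

Even with generous network assumptions, the round trip dominates the total time, which is why keeping computation next to the data, as CIM chips do, can plausibly deliver the hundreds-of-times speedups described above.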
